AI & the Future of Home Automation: Resideo 2024 Innovation Speaker Series


Hello my new Resideo friends!
Thank you so much for allowing me to share some time on the December Product Innovation Series call with you! And thank you for your patience while you waited for this message; I didn’t want it to get buried in OOTO mail over the holidays. I hope everyone enjoyed a little time off.

Now that we’re back at work, as promised, I made this page with a few things that may be helpful:

– the AI NOTES from my session, from Otter.ai
– a downloadable PDF SUMMARY of the most important slides from my talk
– some brief answers to UNADDRESSED QUESTIONS from the chat

Dan Chuparkoff

A.I. & Innovation Keynote Speaker
CEO, Reinvention Labs
AI Educator & Innovation Expert from
Google, McKinsey, & Atlassian
dan@chuparkoff.com | 1.312.869.9777

PDF SUMMARY from Dan’s AI talk:

Download the 3-page high-resolution PDF here:

AI NOTES from Dan’s AI Keynote

*These notes were created automatically by Otter.ai. Transcription errors or mistakes have not been fixed, in order to demonstrate the current state of AI notes.


Summary

Dan Chuparkoff discussed the impact of AI on home automation and innovation during the Resideo Innovation Speaker Series. He emphasized that AI is not a search engine but a content creation assistant, highlighting its role in personalized automation, voice control, predictive maintenance, energy optimization, and security detection. Chuparkoff explained AI's limitations, such as its inability to discover new information or understand human hopes and biases. He also discussed the importance of using AI as an assistant rather than a co-pilot and the potential for AI to democratize language translation, enhancing global collaboration. The session concluded with a Q&A addressing AI's potential to displace jobs and the benefits of using AI tools.

Action Items

[ ] Ask ChatGPT for 10 ways AI might change the work of people in the home automation industry.

[ ] Scan the list of AI-generated suggestions and decide which ones are most relevant.

[ ] Explore the use of AI-powered note-taking and transcription tools, such as Otter AI, to streamline communication and collaboration.

[ ] Investigate the availability of paid versions of AI tools and assess whether the additional features are worth the cost.

Outline

Introduction and Overview of the Meeting

  • Speaker 1, Pat, introduces the Resideo Innovation Speaker Series and welcomes participants.

  • Dan Chuparkoff is introduced as the special guest, focusing on AI and the future of innovation.

  • Pat explains the structure of the meeting, including a Q&A session at the end.

  • Dan Chuparkoff is introduced as an innovation expert with over 30 years of experience in tech.

Dan Chuparkoff's Background and AI's Impact

  • Dan Chuparkoff shares his background, including his work at companies like Atlassian, Google, and McKinsey.

  • He emphasizes the importance of adapting to technology and the impact of AI on various industries.

  • Dan discusses the comparison of AI to fire, highlighting its potential and challenges.

  • He mentions his recent departure from Google to focus on helping teams reinvent their work with AI.

Understanding AI and Its Misconceptions

  • Dan explains the common misconceptions about AI, using an example from a blog post.

  • He discusses the importance of having a common language for AI discussions.

  • Dan shares an anecdote about Carl Bass and the misuse of the term "robot" in consumer products.

  • He reflects on his personal journey with technology and its impact on his career.

The Evolution of AI and Its Technological Impact

  • Dan outlines the major technological changes over the past 30 years, including the PC, the internet, BlackBerry, AWS, and remote work.

  • He compares the current AI revolution to a "pinata" with various valuable contents inside.

  • Dan emphasizes the need for specificity in AI discussions to avoid confusion.

  • He introduces the concept of GPT (Generative Pre-trained Transformers) and its components: generative, pre-trained, and transformative.

AI's Functionality and Limitations

  • Dan explains how AI works by creating content one word at a time, using probability and confidence.

  • He discusses the concept of "hallucinations" in AI and how to navigate them.

  • Dan compares AI to autocorrect in text messages, emphasizing the importance of human judgment.

  • He shares an example of AI's limitations, using a pizza-making scenario from the internet.

AI Tools and Their Applications

  • Dan introduces various AI tools, including ChatGPT, Claude, Gemini, and Llama.

  • He explains the strengths and applications of each tool, such as privacy, image and text processing, and customization.

  • Dan highlights the proliferation of AI tools and their impact on various industries.

  • He emphasizes the importance of using AI as an assistant rather than a co-pilot or boss.

AI's Role in Home Automation and Product Development

  • Dan encourages participants to ask AI specific questions about their industry, such as home automation.

  • He lists potential AI-driven improvements in home automation, including personalized automation, voice control, predictive maintenance, and energy optimization.

  • Dan emphasizes the importance of continuous learning and adapting to new AI capabilities.

  • He discusses the role of AI in content creation and the need for human oversight.

AI's Impact on Communication and Collaboration

  • Dan highlights the role of AI in improving communication and collaboration, such as meeting summaries and transcriptions.

  • He shares his personal experience with AI note-taking tools like Otter.

  • Dan discusses the potential for AI to democratize access to information across different languages.

  • He emphasizes the importance of AI in managing the exponential growth of information.

AI's Future and Human Role

  • Dan reflects on the future of AI and its potential to enhance human capabilities.

  • He discusses the importance of critical thinking and human experiences in decision-making.

  • Dan emphasizes the need for humans to focus on problem-solving, discovery, and imagination.

  • He concludes with a call to leverage AI as a technology partner to create more time for creative and innovative work.

Additional UNADDRESSED QUESTIONS from the CHAT

In the Q&A portion of our talk we only got to a few of the questions, so I’ve made an attempt here to address the remaining questions from the chat. All in my humble opinion. Many AI Marketers… and maybe a few technologists… disagree with me. But I’ve been studying these developments for a decade and a half now, and I’m fairly confident on most of these points.

What are your thoughts on Perplexity.ai?

Those who use Perplexity generally love it. But it’s important to notice that, much of the time, you are just using Perplexity as an interface on top of the ChatGPT and Claude large language models. Perplexity does sometimes leverage some of its own data, models, and algorithms (especially when searching the web in real time). But most users are generally using it simply as a better ChatGPT user experience… which is fine.

Rather than GenAI, how close are we to AGI? Do you believe that AGI will overshadow/outcompete GenAI in the foreseeable future? If so, 5 years, 10 years, more?

There are a lot of different definitions of “AGI” in peoples’ heads, but for the sake of answering this question, I think General Intelligence means: “The AI can answer any possible question.” If that definition is used, then I passionately believe that AGI is forever impossible to achieve.

It’s impossible because there will always be massive inadequacies in the data available to train these algorithms. AI could think of trillions of possible recipes for dinner tonight, but it will never be able to answer the question, “What do you feel like having for dinner tonight?” Maybe that’s a trivial question, but it’s just one example of the thousands of decisions you make with context in your head that will never be available to algorithms.

We could solve that data-availability problem by making sure that everything every person sees or hears is captured and fed into the training data. But then we’re living in a surveillance state with the extinction of all privacy. Even that doesn’t capture the things that you “think.” So everyone could get neural links so that everyone’s thoughts are also captured. Those things would make human-level intelligence a possibility. But there’s no way that’s an acceptable cost (people don’t even want browser cookies).

Because of this impossibility, Marketers and Technologists are gradually softening the definition of AGI to mean “The AI can answer a wide variety of questions at an expert level.” That’s not actually General Intelligence… it’s just Pretty Good Intelligence.

What is the difference between the free and paid version of the AI engines? And is the paid worth the money?

Generally, this answer is changing all the time as each of these companies figures out what its monetization plan is. But currently, most paid versions give you access to a better model that is trained with more current data. At first, you might just be using your AI assistant like it’s Grammarly on steroids, to make your writing a little more concise or clear. While that’s the case, maybe you don’t need the newest, best model, since the rules of grammar haven’t really changed that much. But if you’re using it to brainstorm and strategize in order to keep up with a constantly changing world, the newest version is likely worth the money.

Also, the paid version is only about $9/month. It’s possibly the most powerful tool ever invented; it’s probably worth more than two cups of Starbucks coffee.

So you don’t believe AI can ever become truly independent? I know it sounds like science fiction, but you believe AI will never be self-aware and able to think uniquely?

Architecturally, AI is essentially just a big probability word database. If you ask it a yes/no question, it will give you the most probable answer, because that’s what the training data suggested. That doesn’t mean it “thinks the answer is yes.” It’s just reporting the learned answer back to you.
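To make that concrete, here’s a tiny sketch of the mechanic. The vocabulary and probabilities below are made up purely for illustration (nothing here comes from a real model); the point is just that the most probable continuation wins:

```python
# A toy illustration of next-word prediction. Real LLMs score tens of
# thousands of tokens with a deep neural network; the words and
# probabilities below are invented purely to show the mechanic.

# Hypothetical learned probabilities for the word that follows the
# prompt "Is the sky blue? Answer:".
next_word_probs = {
    "Yes": 0.91,        # most of the training text continues this way
    "No": 0.06,
    "Sometimes": 0.03,
}

# The model doesn't "think the answer is yes" -- it just reports the
# highest-probability continuation it learned from its training data.
answer = max(next_word_probs, key=next_word_probs.get)
print(answer)  # -> Yes
```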

That being said, people will build actions that use AI answers to make decisions and proceed without looking at what’s going on. And as I said in my talk, I think that’s a really bad idea.

Imagine a scientist who is trying to reduce CO₂ in the air, and he connects AI to his CO₂-processing machine. He gives his AI the goal of minimizing CO₂ and has his machine run in a loop, constantly asking the AI how to optimize for that goal and then making the adjustments in the machine. He stops looking at the machine and goes to the beach. It’s possible for the machine to run out of control in a loop until ALL of the CO₂ has been removed from the air, and all the trees die, and then all the people with them. In that case, to be very clear: AI did not kill all the people. The scientist who irresponsibly set his CO₂-removal machine up in an unmonitored loop is the guy who killed all the people.

AI’s ideas are just recommendations. Never set it up in an unmonitored loop without human decision making and review.
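Here’s a minimal sketch of what that human-in-the-loop structure looks like. The function names (ask_ai_for_adjustment, apply_adjustment) are hypothetical stand-ins for whatever AI call and machine interface a real system would have; what matters is the shape of the loop, which is bounded and gated by a person:

```python
# A sketch of human-in-the-loop AI control. ask_ai_for_adjustment and
# apply_adjustment are hypothetical stand-ins, not real APIs.

def ask_ai_for_adjustment(goal: str) -> str:
    # Stand-in for a real AI call that recommends a next step.
    return "increase scrubber intake by 5%"

def apply_adjustment(adjustment: str) -> None:
    # Stand-in for whatever actually changes the machine.
    print(f"Applying: {adjustment}")

def run_with_human_review(goal: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):  # bounded: never an endless loop
        suggestion = ask_ai_for_adjustment(goal)
        # The critical line: a human reviews every recommendation
        # before anything changes in the real world.
        if input(f"Apply '{suggestion}'? [y/N] ").strip().lower() != "y":
            print("Skipped.")
            continue
        apply_adjustment(suggestion)

if __name__ == "__main__":
    run_with_human_review("minimize CO2 in the air")
```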

How will everyday AI use in our workplace affect which soft skills are important for us to succeed in our careers?

Problem Solving, Discovery, Decision Making, and Imagination are the most important skills.

What is your most contrarian opinion about AI and home IoT?

Maybe not contrarian amongst experts like you, but…

Even though vision sensors in home IoT will provide an amazing array of potential new capabilities, cameras in our homes are a difficult consumer-adoption hurdle to overcome. I think teams should avoid building products with vision for as long as possible, until consumer trust is so strong that it’s clear the privilege of vision won’t be abused.

How can I start preparing now to manage future direct reports who used AI like ChatGPT in school? Do you anticipate a values disconnect between mid-career millennials and Gen Alpha/Beta employees?

There’s a lot to unpack in here, but I’ll start with this: new entrants in the workforce are definitely using AI assistants, whether you like it, or realize it, or not. My biggest message for organizations is to recognize that if AI is locked down too tightly, then “shadow use” of AI is likely to happen (people using it on personal devices). Certainly there will always be values disconnects between multiple generations in the workforce at the same time. There will even be clashes between Alpha and Beta. But diversity of perspectives is a good thing. Encourage those debates and discussions, and go into them with an open mind and respect for varied opinions. Decide together as a team.

Language is about self-expression; words, meanings, and concepts are not equivalent among languages. What’s the codex that determines the translation between languages (e.g., blitzkrieg)? Is there any concern about a cultural flattening occurring as a consequence of lowering communication barriers globally?

First and foremost, I’m excited about inviting an additional 6 billion people into the Global Knowledge Ecosystem, so I am talking about enabling ways for them to participate. The codex that determines those translations is the codex that’s already in place. AI, by design, can only replicate what it already sees in the world. So the codex that makes AI translations work is the one used to translate multilingual books, multilingual websites, and multilingual YouTube videos. When I discuss this with non-native English speakers, they’re excited, even if there are some improvements to be made.

What did you use for the language translation?

HeyGen and ElevenLabs. ElevenLabs trains on a short sample of my voice to automatically translate the sound of my voice into other languages. HeyGen does the video. It learns the shape of my lips and my face and hand gestures, and replicates those in new video.

Why are you so certain that AI will never be capable of taking our jobs?

For the same reason that I don’t believe AGI is achievable: there is too much context in your head that isn’t in the training data. As long as your job involves some Problem Solving, Discovery, Decision Making, and Imagination, I think your job is safe. Processing jobs are at risk. Recommunication jobs are at risk. Basic inspection jobs are at risk. However, even people doing those things are usually also doing some of the higher-level Problem Solving in addition to those core lower-pyramid responsibilities.

Are there any AI tools for Product Managers specifically that are worth looking into?

Most of the tools you’re already using… Jira, Notion, Figma, whatever… are already working on building AI features into their products. So generally, I don’t think you need to change your core product software stack too much. You’ll just get AI from your current providers without doing anything special. You do probably want a brainstorming and strategizing AI tool, though. For now, for that kind of help, I would go directly to the LLMs. I would play around with multiple Large Language Models (Claude, ChatGPT, Copilot, or Gemini… or all four!) to see which of them works best for you.

Nice presentation, presented in a manner which is relevant to all attendees. The part which is not addressed is how to go forward with collaboration, i.e., what is a good plan/direction going forward based on the presentation? Topic groups, break-out groups, …?

  1. Talk to IT & Legal to figure out what privacy policies your team is comfortable with.

  2. Create corporate and team policies about what is allowed and what isn’t. Maybe right now, you want your team focused on Asking Questions and not uploading anything that might be proprietary.

  3. If you have an Asking Questions focused policy, then focus on finding ways to use AI as a brainstorming partner. Ask it for 25 ways you could change the Home Automation Industry and just use it as a helpful brainstormer.

  4. Figure out which AI note-taking app you are comfortable with. Most AI note-takers’ privacy policies are pretty good about protecting the IP you’ve discussed in meetings. But take a look for yourself.

  5. If you get through those things, then reach back out and we’ll talk about what AI Adoption 2.0 should look like.

How does your statement that “hallucinations aren’t going away” and suggestion to “use human context” jibe with ideas like summarizing content and real-time translations, which both imply trusting what’s generated without being able to verify it?

It doesn’t. Those summaries and translations WILL have errors in them. I deliberately left any transcription errors in the AI Notes above so you could see the current accuracy. It’s nearly perfect, but not exactly perfect. But my suspicion is that you don’t have a unified meeting-notes repository at all right now, and that you’re also not automatically translating all of your docs into Hindi or Spanish. So having some notes and translations that are 99.9% accurate is likely an improvement over your current state.

Do you believe in the Singularity?

In Kurzweil’s book, he simply described the Singularity as the time when computer calculations per second surpass the number of calculations the human brain can perform. He said that would happen around 2045. I think there is certainly a time when that calculations-per-second threshold gets crossed. Probably sooner than 2045. Calculations per second, though, is only half of the equation. The collected and stored data necessary for making decisions with all of the applicable context is the other half, and per the AGI thread above, I don’t think we ever achieve that state.

Brian Christian said in The Alignment Problem that AI has inherent gender biases. Do you think this will be an issue for HR roles?

Yes. When we say there are biases, we generally mean, “We wish the world was slightly different than it is… we wish the world was more fair… that opportunities were more distributed.” That’s not actually how the world is yet. The world has been filled with bias and inequality. We IMAGINE a world that is different from that training data. And that imagined future is the thing that helps us guide our decision making. That imagined future isn’t in the AI training data… and it won’t be. AI will give you recommendations based on the patterns it has observed in the post-Internet world. That will bias AI’s judgment and its recommendations, and that problem doesn’t get fixed until equality is the observable norm in the world. That’s going to take a while.

It’s probably a really bad idea to use AI for hiring or performance-management applications. I’m not in HR Tech, so there probably are some safe applications, but I certainly wouldn’t just upload a bunch of resumes into ChatGPT to ask it who you should hire. That can’t possibly be a good idea, given the flawed examples in the world it learned from.

Otter.AI seems to be blocked by IT. Are we allowed to use it?

Talk to IT. If it’s blocked, there is clearly an issue. Have a discussion about it. Work with the team to create a policy about the non-IP situations in which it could be used.

Have you heard of telling the AI “not to hallucinate,” and that this can help with getting answers that are not incorrect or fake?

The problem here is that “incorrect” and “fake” are not as binary as we would all like them to be. Instead, ask it for its best sources and then go to those sources to review the information that is there. You decide if it’s true and real enough. Those are tricky words.

Over the years we have used tools that utilized regular expressions to generate contextual recognition in the data sets. How much of the core of these AI generators still uses that technology as a basis?

These algorithms are fundamentally different from the ground up. You cannot query an LLM the way you’d query those older pattern-based tools; the “knowledge” lives in the hidden layers of a very complex neural network. However, you can add context through your prompting, and in a dialogue with multiple passes of review and added context, the results can be even better.
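A small sketch of the contrast: the regex half below is real, runnable pattern matching; the ask_llm helper is a hypothetical placeholder for an actual model call, since the point is the workflow (add context in the prompt, then review the output yourself), not any specific API:

```python
import re

text = "Thermostat T9 reported a temperature of 72F at 3:04 PM."

# The older approach: an explicit, inspectable pattern you wrote
# yourself, which you can query and test directly.
match = re.search(r"(\d+)F", text)
print(match.group(1))  # -> 72

# The LLM approach: there is no pattern to inspect. You steer the
# hidden network by adding context to the prompt, then review the
# generated answer yourself. ask_llm is a hypothetical placeholder.
def ask_llm(prompt: str) -> str:
    return "72"  # stand-in for a real large-language-model call

prompt = (
    "From the sentence below, extract the temperature in Fahrenheit. "
    "Reply with the number only.\n" + text
)
print(ask_llm(prompt))  # -> 72
```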

Are there AI companion tools for content management systems, like sitecore?

Every software team is likely figuring out what its AI feature set is or will be. So while there might not be one yet for your CMS, they will certainly start popping up soon.

What about nefarious applications for AI? Should we be worried, and what can we do to protect against bad AI?

Yes, you should be cautious. Ask AI what your most vulnerable points might be; AI is good at helping you find vulnerable points. The good news is that Good AI will help to protect us from Bad AI (and the bad actors who are pushing that nefarious stuff into the world). The biggest mistake some organizations are making is blocking AI completely until the theoretical dust settles. It might be too late then. Inaction is the deadliest poison.

Is there a specific AI to help do a first run through a legal document/contract to help reline issues?

I am not yet familiar with the landscape here, but there are likely many of these beginning to sprout up.

How quickly do you think companies will adopt AI? One major hurdle I’ve noticed is that legal departments often restrict or limit how these technologies can be used, either by outright denying their use or imposing significant limitations. Have you had any discussions about this issue?

Yes, for sure. And this happens with any new technology. Incumbent, leading organizations often have lots at risk, and restrictions help to preserve safety and mitigate that risk. If you’re competing against other large organizations, their adoption speed is probably similar to yours, and you can trust that making small, calculated, gradual adoption adjustments is probably the right strategy (as it always has been).

However, when you find yourself losing market share to the 15-person startup in an incubator, it might be time to tackle the change with bolder bets. Those small startups have nothing to lose and all of your customers to gain, so they’re going to start adopting anything and everything they can get their hands on.

Adopt new technologies with the speed that your competitive landscape demands.

Can the AI rebel against itself?

AI can’t really actually “do” anything; it’s just a knowledge repository. However, as discussed above, a person could build some kind of agent that gets stuck in an endless loop. The best practice in AI use is to use it in short, controlled increments as an assistant. Take all of its recommendations with a grain of salt.

If AI is not trained with our inputs, how is it trained?

These algorithms are not open-sourced, so we don’t actually get full detail on what data sets they’ve used to learn. However, it’s fairly clear that webpages, social media posts, and videos with public visibility are in that corpus. Anything that was posted to the internet with public visibility is potentially included. But each algorithm is slightly different, and again, no declaration about sources has been made by the Gemini, Claude, or ChatGPT teams.

Thank you!