Our Personal AI Assistants Will Soon Be Our Interfaces to the World


Something I captured back in 2016 is that much of the marketing that’s now done on you will soon be done against your personal AI assistant.

Your AI will have your preferences, and it will constantly adjust those based on continued interactions over time. And at that point it will be your primary filter for reality.

All of this will be determined by your AI. And your AI will be interacting with thousands of APIs that represent reality so that it can act on your behalf:

  • new news stories

  • new analysis of that news

  • all the new books coming out

  • the lists of new music

  • new artists

  • new releases from artists you already follow

  • the services from every business near you

  • the personal daemons of people near you (think AI-to-AI wingpeople)

You will be served a curated list of these things by your AI.
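That curation loop can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real assistant's implementation: the names (`PreferenceProfile`, `score`, `curate`) and the keyword-weight scoring are assumptions standing in for what would really be learned models adjusting to your continued interactions over time.

```python
# Hypothetical sketch of an AI assistant's curation loop:
# score incoming items against learned preferences, serve the top slice.
from dataclasses import dataclass, field


@dataclass
class PreferenceProfile:
    # weights adjusted from continued interactions over time
    weights: dict = field(default_factory=dict)

    def update(self, topic: str, feedback: float) -> None:
        # nudge the topic weight toward the user's reaction
        self.weights[topic] = self.weights.get(topic, 0.0) + feedback


def score(item: dict, profile: PreferenceProfile) -> float:
    # sum the user's learned interest across the item's topics
    return sum(profile.weights.get(t, 0.0) for t in item["topics"])


def curate(items: list, profile: PreferenceProfile, top_n: int = 3) -> list:
    # rank everything the backend APIs surfaced; only the top slice is served
    return sorted(items, key=lambda i: score(i, profile), reverse=True)[:top_n]


profile = PreferenceProfile()
profile.update("security", 2.0)   # user keeps engaging with security news
profile.update("music", 0.5)      # mild interest in new releases

feed = [
    {"title": "New breach analysis", "topics": ["security", "news"]},
    {"title": "Album release", "topics": ["music"]},
    {"title": "Celebrity gossip", "topics": ["entertainment"]},
]

for item in curate(feed, profile, top_n=2):
    print(item["title"])
```

The point of the sketch is the shape, not the math: everything competes for a weight in your profile, and only what scores highest ever reaches you.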

So the way to influence you will be to get seen by your AI, and to convince your AI that you want to see THIS thing. Influence campaigns (marketing, propaganda, education) will be human-directed but AI-generated and AI-managed. And they'll be continuous.

The main interface to reality will be you, me—everyone—all talking to our AIs and telling them what we want to do, and then our AIs using a thousand different tools and APIs and services on the backend to make that happen.

There will, of course, still be direct human engagement, but even then our AIs will be there monitoring, validating, summarizing, integrating, and otherwise enriching those experiences.

But most of our time is spent alone. With our thoughts. Our memories. Bouncing ideas around. Contemplating. Iterating. Perhaps preparing to release something to a wider audience. And our AIs will be the primary filter for all of that.

Your AI will augment you toward whatever you're trying to accomplish. Continuously, and in both directions: curating your inputs, and sharpening your outputs to make you more effective and successful.

Our AIs will be the filter layers between ourselves and reality.

And the security issues there—compromise, manipulation, and basically any integrity failure of our AIs—will be extraordinary. Because the extent to which you control someone's personal AI will largely be the extent to which you control them as well.


