I’m going to try something different this week: moving this newsletter toward longer analysis and away from a series of tips and tricks. If you have any thoughts you’d like to share about the format, please leave a comment or send me a message. Also, I have a poll on AI productivity down below. You know what to do.
What About Apple?
We’re still in Phase 1 of the AI Arms Race
As recently as a year ago, I was still sharing resources with my students about how to get hired at a FAANG company. This was before the wave of tech layoffs, and for those of you who have forgotten, FAANG stands for Facebook, Apple, Amazon, Netflix, and Google. Now a whole host of newcomers are vying for a role at the top of the new world order. Depending on who you talk to, it looks something like this: OpenAI, Microsoft, Meta (formerly Facebook), Google (Late to the Party), Amazon+Anthropic, and the Chinese trio of Baidu, Alibaba, and Tencent.
While Apple has plenty of AI powering its products, not much of it is consumer-facing. If all the other previously mentioned AI companies are already in an AI arms race, one-upping each other every month with new announcements, what happens when Apple joins the fray? And Google, probably still stinging from its humiliating Bard experience, is certainly going to aim high when it releases Project Gemini by the end of the year. Recent reports suggest that Apple is on a hiring spree and planning major spending in 2024. In July, there was speculation that Apple would spend $1 billion on servers for its AI endeavors, but the current estimate is closer to $5 billion. While that may sound like a lot of money, it still falls behind other companies like Meta and Microsoft. Apple analyst Ming-Chi Kuo thinks Apple needs to dramatically increase its spending if it wants to remain relevant in the AI game. Apple clearly has the cash to do whatever it wants and the experience to make desirable products, but what can it do to catch up with the others?
As Google rolls out Project Gemini in the next few months, it’s expected to be much more conversational and powerful than our current models. Since Apple usually gets it right from a user experience perspective, and we can assume they’re watching Google closely, I’m predicting that we’ll see a next-level AI product from Apple in early 2024. While DALL-E 3 was not a Midjourney killer in terms of image quality, it has opened up a whole new chapter of ‘natural language prompting’ that continues to erode the need for specialized ‘prompt engineers’. I expect that Apple will introduce some sort of AI tool or OS modification that is incredibly easy to use.
Why it matters: Apple is really late to the party. Adding a Keynote assistant or making Siri more powerful isn’t going to impress anyone. It needs a huge game-changing hit.
Time Magazine’s Top 200 Best Inventions for ‘23
I haven’t finished reading the whole thing yet, but it should come as no surprise that this year there is an AI section. It includes several things that have been mentioned in this newsletter, but also several that were new to me. Make sure you check out Stable Audio, Authentic AI, and Stopping Wildfires. And then check out the entire section devoted to Accessibility. If you were one of my students between 2019 and 2021, you probably remember the “Smart Cane” assignment that we did as an in-class sprint. I’m glad to see that someone finally made it a reality.
Make sure you check out the innovative concepts in Apps and Software, AR+VR, Design, Sustainability, and a dozen other categories. Who helped make these products great? Designers, that’s who! So if you’re feeling bored or stuck in your job and you need a little inspiration, go check it out. And according to GeekWire, five Seattle-area companies made the list.
Why it matters: A solid understanding of Human-Centered Design, combined with the power of AI, can allow us to create products that change the world.
The Cult of Productivity Will Save Us All?
You’ve probably seen statistics like these before; it’s OK to skip over them.
Over 40% of companies surveyed by Deloitte plan to adopt some form of generative AI within the next year. Adoption is happening quickly across sectors.
Generative AI could automate up to 45% of the activities people are currently paid to do according to a McKinsey study. This could significantly disrupt the job market and workforce.
By 2025, IDC predicts over 50% of enterprises will leverage generative AI to augment human skills and automate tasks. This will change how businesses operate.
The market for generative AI is projected to grow from $4.9 billion in 2022 to over $142 billion by 2030 according to Reports and Data. Exponential growth is expected.
Every article I read points to some sort of Holy Grail of increased productivity. Either it’s a software product promising that you’ll do your job 10x faster, or a consulting firm forecasting that by 2025 X% of jobs will disappear because of our increased productivity. Or it’s a billionaire optimist like Sam Altman or Marc Andreessen talking about some rose-colored future where we all work only 3.5 days a week. But the internet has brought us 20+ years of innovation, and are we really any more productive? Wasn’t email supposed to save us? And then Jira and Trello and Asana? And is Slack making you more productive? Sure, ChatGPT can write that rough draft for me, and there are a dozen tools that can speed up the production of my slide deck, but are any of these innovations really going to improve the quality of my work life? I’ve created a poll (below). Please let me know how you really feel.
Why it matters: If the productivity gains are real, they should show up in the quality of our work lives, not just in vendor pitches and consulting forecasts. Take the poll and tell me what you’re actually seeing.
A Growing Call For Ethics and Accountability for AI
I read a lot of AI articles every week, and I’m seeing a growing call for ethics, accountability, and managing risk. Nobody seems to be raising the alarm about Skynet and the Terminator destroying humanity, but many are trying to remind us of the risks of Big Tech run amok. Consider this:
Demis Hassabis, the CEO of Google’s DeepMind, says,
“We must take the risks of AI as seriously as other major global challenges, like climate change. It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.” Check it out
A group of concerned scientists is calling for scientific oversight to test and certify generative artificial intelligence before the technology damages science and public trust. Check it out
AI pioneers Yoshua Bengio and Geoffrey Hinton, two of the so-called AI godfathers, have joined 22 other leading AI academics and experts in suggesting that companies and governments devote a third of their AI research and development budgets to AI safety; they also stress the urgency of pursuing specific research breakthroughs to bolster AI safety efforts. Check it out
Steve Case, co-founder of AOL, says that the US government needs to regulate AI to ensure public trust and distribute economic opportunity. Check it out
Anthropic, Google, Microsoft, and OpenAI have announced an Executive Director of the Frontier Model Forum and over $10 million for a new AI Safety Fund that is supposed to provide some ethical guidelines, but it is dwarfed by the billions being spent on the development of AI systems. Check it out
Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, says, “We’re witnessing a race to the bottom that must be stopped. We urgently need AI safety standards…regulation is critical to safe innovation, so that a handful of AI corporations don’t jeopardize our shared future.” Check it out
Fortunately, OpenAI is forming a team and offering a $25,000 grant to help assess potential “catastrophic risks”. Check it out
Why it matters: Remember Crypto? Remember the Banks? Remember the Airlines? Remember ‘Too Big To Fail’? Who’s going to come to the rescue if everything falls apart?
Trending: Poison Pixels?
Thanks to regular reader Marie Z., who was the first to share this hot tip (several others sent it in as well). Ben Zhao, a professor at the University of Chicago, has created a tool called Nightshade that can poison training data when images are scraped by generative AI tools. Theoretically, it embeds invisible changes at the pixel level that can corrupt the output of tools like Midjourney and DALL-E 3. The MIT Technology Review has a full report.
Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
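The MIT Technology Review piece stays high-level, and Nightshade’s actual technique is more sophisticated than anything shown here, but a toy sketch can make the general idea of pixel-level poisoning concrete: nudge an image, within an imperceptibility budget, so that a model’s feature extractor maps it toward a different concept (a hat that “reads” as a cake). Everything below is a hypothetical illustration in PyTorch; `feature_extractor` and `target_features` are stand-ins, not Nightshade’s code.

```python
# Conceptual sketch of a pixel-level poisoning attack -- NOT Nightshade's
# actual algorithm. We learn a tiny perturbation `delta` (bounded by
# `epsilon` per pixel, so it stays invisible to humans) that pulls the
# image's features toward a different concept's features.
import torch
import torch.nn.functional as F

def poison_image(image, target_features, feature_extractor,
                 epsilon=8 / 255, steps=100, lr=0.01):
    """Return `image` plus an imperceptible perturbation whose features
    resemble `target_features` (e.g., 'cake' features for a hat photo)."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Distance between the poisoned image's features and the target concept.
        loss = F.mse_loss(feature_extractor(image + delta), target_features)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Keep the perturbation imperceptible and pixel values in [0, 1].
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((image + delta).clamp(0, 1) - image)
    return (image + delta).detach()
```

A human still sees a hat, but a model trained on enough of these poisoned samples starts associating “hat” with cake-like features, which is exactly the kind of corruption the MIT report describes.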
Why it matters: If enough people use it and it actually damages their models, perhaps the big AI companies will think twice before scraping everyone’s data.
Ending On a Positive Note
So, this video is currently speculative After Effects wizardry, and NOT what your Adobe-Figma workflow actually looks like. It’s a teaser of what things might look like, but if they get even half of these things right, it will be pretty cool!
That’s a Wrap! Wow, I knocked out three issues of the newsletter this week. Thanks for all your support, keep those suggestions coming, and keep learning!