An “AI race” is ongoing, and the big technology companies are fighting for their share of tomorrow’s market for technology tools. Following the massive exposure of ChatGPT, OpenAI’s generative AI solution, several big announcements were made in mid-March 2023.
Tech giants line up for an AI race
On March 14th, Google announced “a new era for AI and Google Workspace”, offering AI-based productivity tools that would allow users to generate responses to emails, summarize email threads, take notes in video meetings, and create presentations from textual documents. It comes as no surprise that these capabilities will be integrated into Google Workspace, Google’s competitor to Microsoft’s 365/Office suite. For the time being, though, these tools will only be available to “trusted testers”: the technology is still being developed and is not yet ready for you and me. So why announce it already? Most probably to create buzz and demonstrate that Google holds a strong position in the AI race.
On the same day, OpenAI announced GPT-4, its next-generation large language model or, in simple words, the next version of the model behind ChatGPT. After ChatGPT had already been praised as far superior to competing technologies, GPT-4 is said to be significantly better still. But OpenAI did not reveal insights into why and how, which may be because it wants to fully monetize its “first mover advantage”. One cannot rule out, though, that the main goal of the new release was to create yet more buzz.
Only two days later, Microsoft made a similar announcement, introducing Microsoft 365 Copilot, which will offer similar capabilities and will be embedded in Microsoft 365 apps such as Word, Excel, PowerPoint, Outlook, and Teams: tools that are already used daily by millions of business users. Earlier this year, Microsoft had already made a multi-billion-dollar investment in OpenAI, and it has already incorporated ChatGPT into its Azure OpenAI Service.
What is Generative AI?
All these tools use generative AI. You probably already know that AI stands for Artificial Intelligence. But what is generative AI? In essence, generative AI is software that generates content (text, images, or audio) in such a way that the result seems realistic and creative, as if an intelligent human had created it. The software can generate this content because its underlying algorithm has been trained on large amounts of training data.
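To make the idea of “learning from training data and then generating new content” concrete, here is a deliberately tiny toy sketch: a word-level Markov chain that learns which words follow which in a sample text, then strings plausible words together. This is not how GPT-4 or any real large language model works (those use neural networks with billions of parameters), but it illustrates the same principle of generation driven entirely by patterns in training data.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a toy word-level model: map each word to the
    list of words observed to follow it in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Generate text by repeatedly sampling one of the words
    that followed the current word in the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no known continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train(corpus)
print(generate(model, "the", length=6))
```

Notice that the output can only ever recombine patterns seen in the training text; whatever is missing or skewed in that data is missing or skewed in the output, which is exactly the bias problem discussed next.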
This training data is a black box: no supplier will give you insight into their training data. The problem with this black-box approach is that the data may be biased. To understand what data bias means, suppose you run a study on what percentage of the population is blond, and you take a sample of many people, say even 1 billion. You would expect that with such a big dataset you can easily deduce a realistic figure for the share of blond people. But if your dataset includes only people from Africa, where blond hair is a rare exception, it won’t be representative. If you then apply the insights learned from this dataset to Swedes, Germans, or Dutch, you will quickly generate incorrect, biased, and even hurtful content, simply because your underlying dataset was biased: it only included Africans.

This is just a simple example. Generative AI needs to make much more impactful decisions, e.g. about what is right and wrong (for example, when the software generates an article weighing pros and cons), about what is considered “normal”, about how people express their opinions, and so on. It therefore comes as no surprise that GPT-4 (and any of its competing products) can still generate biased, false, and hateful text. As the technology matures, I expect these issues will be better mitigated, partly by technological advances and partly by government regulation that will force tech companies to build safeguards into their tools and into the process of developing them, including rules about training data. As part of its digital strategy work, the European Union has already been working on Ethics guidelines for trustworthy AI.
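The blond-hair example above can be demonstrated numerically. The sketch below uses an invented toy population (the regions and base rates are assumptions for illustration, not real demographics): sampling from the whole population gives a reasonable estimate, while an equally large sample drawn from only one region is confidently wrong, no matter how big it is.

```python
import random

def estimate_blond_share(population, sample_size, rng):
    """Estimate the share of blond people from a sample of the given population."""
    sample = rng.sample(population, sample_size)
    return sum(1 for person in sample if person["blond"]) / sample_size

rng = random.Random(42)

# Hypothetical toy world: two regions with very different base rates
# (1% blond in region A, 40% blond in region B).
region_a = [{"region": "A", "blond": rng.random() < 0.01} for _ in range(50_000)]
region_b = [{"region": "B", "blond": rng.random() < 0.40} for _ in range(50_000)]
world = region_a + region_b  # true overall rate is about 20.5%

# An unbiased sample drawn from the whole population...
fair = estimate_blond_share(world, 5_000, rng)
# ...versus a sample drawn only from region A: large, but unrepresentative.
biased = estimate_blond_share(region_a, 5_000, rng)

print(f"fair estimate:   {fair:.2%}")   # near the true ~20.5% rate
print(f"biased estimate: {biased:.2%}") # near 1%, however large the sample grows
```

Swap "blond" for any property a model learns from its training data and the lesson is the same: a bigger dataset does not fix a skewed one.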
Will generative AI revolutionize how we work?
Will generative AI revolutionize how we work? In short, my answer is YES. But it will be a gradual process, as the technology matures and moves from lab conditions towards mass adoption.
Both Google and Microsoft were quick to push generative AI solutions into their business productivity tools: Google Workspace and Microsoft 365/Office. The reason for that is simple: almost everyone uses these tools, so embedding generative AI in them is the fastest way to reach practically every consumer. These tools are the key to mass adoption.
Have you ever sat for a long time over an email, contemplating how to word your message carefully? Have you ever spent hours preparing a presentation from material that is available in text form (e.g. a book, product brochures, etc.)? Have you ever spent hours figuring out which Excel formulas to use, and how, to build the pivot tables that best answer your questions? Imagine that all this time could be saved by letting an AI-based tool perform these tasks for you, so that you only need to review the results rather than create them. Once the tools are trained and mature enough that you can trust the content they generate, they will save you a lot of time, freeing you up to focus on tasks that require either more advanced skills or interpersonal skills, or… to just meditate, get a tan on the beach, read a good book, or do anything else you enjoy.
These tools have the potential to save a lot of your time, and then it’s up to you what you want to do with this extra time. What would you do with a couple of hours per day that you manage to free thanks to such tools?
Of course, this is not the complete story. First, because companies know that generative AI tools will improve productivity, and they will demand that their staff become more productive. Your employer will create new tasks to fill the time you’ve saved thanks to generative AI tools, rather than letting these tools temper the race for employee productivity in the corporate world. Second, because not everyone will be able to adopt such tools with the same proficiency. As individuals we are responsible for our own actions, but as a society we also have a responsibility to think about those who are less capable. Third, and very importantly, I think the question marks concerning ethical AI are of great importance. In order to use generative AI as a productivity-enhancing tool, we need to trust these tools. If they were developed using training data that we know nothing about, how can we trust them? I mentioned earlier that I expect government regulation of this domain. But I think the business community can also rise to the challenge and define international standards for ethical and trustworthy AI, similar to the ISO standards for IT security, food safety, and more.
Last but not least, are you wondering why I have not mentioned Facebook (Meta) in this article? Based on Mark Zuckerberg’s recent announcements, I think he’s still trying to figure out what the next big game is: Metaverse? Generative AI? By the time Meta figures it out, it may find out that others have already won this race…
Suggested reading and watching:
- What are the pros and cons of the ChatGPT chatbot?
- Introducing Microsoft 365 Copilot – your copilot for work; by Jared Spataro, Corporate Vice President, Modern Work & Business Applications, Microsoft, March 16th, 2023
- Introducing Microsoft 365 Copilot | Your Copilot for Work, YouTube video, by Microsoft 365.
- A new era for AI and Google Workspace, Google product announcement, March 14th, 2023.
- What Is Generative AI? Boston Consulting Group