Generative artificial intelligence (AI) has become a hot topic, with ChatGPT reaching one million users in just five days, surpassing the adoption rates of other major platforms like Twitter, Facebook, Spotify, and Instagram. This surge in interest has led to a multitude of questions for businesses.
A recent webinar hosted by Gartner, titled “Beyond the Hype: Enterprise Impact of ChatGPT and Generative AI,” aimed to address these concerns and explore the potential of AI technology for organizations.
The webinar, hosted by Scott L. Smith, featured a distinguished panel of Gartner analysts:
- Frances Karamouzis – Distinguished VP analyst at Gartner, specializing in AI, hyperautomation, and intelligent automation. She focuses on research related to strategy, creating value, use cases, business cases, and disruptive trends.
- Bern Elliot – Vice president and distinguished analyst at Gartner Research. His research currently focuses on AI, especially natural language processing (NLP), machine translation, and customer engagement and service.
- Erick Brethenoux – Chief of research for AI at Gartner. He focuses on AI techniques, decision intelligence, and applied cognitive computing. Brethenoux helps organizations with the strategic, organizational, and technological aspects of leveraging AI for growth.
The discussion explored the many applications of generative AI across various industries. From generating creative text formats to producing audio, images, and even 3D designs, the technology promises to revolutionize how businesses approach content creation and innovation.
Furthermore, analysts emphasized the potential for AI tools to drive growth and cost savings, but also acknowledged concerns around ethics and potential job market disruption.
A deeper look at generative AI capabilities and considerations
Erick Brethenoux, a Gartner analyst, explained that generative AI uses a massive amount of data to learn and then create entirely new and original artifacts, which can encompass various forms of creative content.
Brethenoux clarified the relationship between different terms: generative AI and large language models (LLMs). As he explained, generative AI is the overarching discipline, while LLMs are a specific type built on vast amounts of text data. ChatGPT, the popular application, sits on top of an LLM, allowing users to interact with it.
Generative AI can produce programming code, synthetic data, and even 3D models for use in computer-aided design systems. It can also be used to develop entirely new game strategies and even generate rules through inference.
One example Brethenoux highlighted is a system that generates unforeseen strategies in a two-sided obstacle game. As the system runs, it discovers innovative ways to overcome obstacles, which translates to real-world applications like uncovering new supply chain routes or customer outreach methods.
To help navigate this vast potential, Brethenoux introduced the concept of a “use case prism.” This framework considers both business needs and feasibility when evaluating potential applications of generative AI. Media content improvement and code generation are examples of high-value, readily achievable use cases.
Among the benefits of generative AI, versatility stands out: a single model can generate diverse content formats. Accessibility is another advantage, thanks to platforms like ChatGPT that make the technology readily available. Additionally, generative AI offers lower entry costs, allowing experimentation with minimal initial investment.
Nevertheless, there are also risks to consider. Domain adaptation, the process of tailoring models to specific needs, requires ongoing maintenance to ensure compatibility with evolving base models. Copyright issues and potential biases in the generated content are other concerns. The concentration of power within a limited number of companies due to the immense resources required to develop these models is also a consideration.
The potential for misuse and generation of harmful content necessitates careful validation and verification of any outputs from these systems. Finally, the opacity of such large machine learning models, often referred to as “black boxes,” makes it challenging to explain their reasoning behind the generated content.
Unveiling the power of ChatGPT
Bern Elliot, another Gartner expert, explained the inner workings of ChatGPT, addressing some of the challenges faced by enterprises and offering practical use cases.
ChatGPT, as Bern explained, is a software application with two key parts: a conversational interface and an LLM component. The conversational part refines user input before submitting it to the underlying LLM. In ChatGPT’s case, this LLM is a heavily curated version called GPT-3.5.
There are two main versions of ChatGPT available: the original from OpenAI and another offered by Microsoft through its Azure OpenAI services. While both leverage the same core algorithm, they have diverged in terms of input/output filtering, operations, and the underlying model itself. Bern emphasized that Gartner has more confidence in Microsoft’s ability to deliver a secure and compliant cloud-based service.
When it comes to using ChatGPT, there are two primary approaches: out-of-the-box and custom models. The out-of-the-box model offers a user-friendly interface but provides limited customization and control. Conversely, custom models require significant investment and expertise but allow for greater personalization and potentially lower costs.
An interesting concept introduced by Bern is prompt engineering. Since directly modifying these large models is challenging, prompt engineering focuses on crafting the input (prompts) to achieve the desired outputs. By supplying the right information and structuring it effectively, users can steer the LLM towards more relevant and accurate results. This approach is relatively inexpensive and can be gradually improved over time. Prompts are also crucial for integrating ChatGPT with business systems, as they enable the inclusion of business data for more specific outputs.
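To make the idea concrete, here is a minimal sketch of how a prompt might be assembled around business data before it is sent to a model. The helper function, field names, and sample record are illustrative assumptions, not something shown in the webinar or tied to any specific product.

```python
# Minimal prompt-engineering sketch. The function name and sample data are
# illustrative assumptions, not part of the webinar or any vendor's API.

def build_prompt(task: str, business_context: dict, user_question: str) -> str:
    """Assemble a structured prompt that steers an LLM with business data."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in business_context.items())
    return (
        "You are an assistant for our customer service team.\n"
        f"Task: {task}\n"
        f"Business context:\n{context_lines}\n"
        f"Question: {user_question}\n"
        "Answer using only the context above; say 'unknown' if the context is insufficient."
    )

# Example usage with a hypothetical order record
prompt = build_prompt(
    task="Summarize the order status in two sentences.",
    business_context={"order_id": "A-1042", "status": "shipped", "carrier": "DHL"},
    user_question="Where is my order?",
)
print(prompt)  # This string would then be submitted to the LLM of choice.
```

The point of structuring the prompt this way is that the model's behavior is shaped entirely through its input, which can be iterated on cheaply without touching the model itself.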
Bern then showcased a compelling use case that combines a large language model, a chatbot, and a search function. In this scenario, a user submits a search-like request through an interface. The application retrieves relevant information, processes it using Natural Language Processing (NLP), and feeds it to the LLM along with a specific task, such as summarizing retrieved documents. The LLM then condenses the information into a user-friendly format, while maintaining traceability to the original source. This exemplifies the power of LLMs in generating content specific to an organization’s internal data or search results.
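A rough sketch of that retrieve-then-summarize pattern is shown below. The `search_documents` and `call_llm` functions are placeholders standing in for an enterprise search backend and an LLM endpoint; both, along with the sample documents, are assumptions made for illustration.

```python
# Sketch of the search + LLM summarization pattern described above.
# search_documents() and call_llm() are placeholders for an enterprise search
# backend and an LLM endpoint; they are assumptions, not a specific product API.

from typing import Dict, List


def search_documents(query: str) -> List[Dict[str, str]]:
    """Placeholder retrieval step; a real system would query an internal index."""
    return [
        {"id": "doc-17", "title": "Return policy", "text": "Items can be returned within 30 days..."},
        {"id": "doc-42", "title": "Shipping times", "text": "Standard shipping takes 3-5 business days..."},
    ]


def call_llm(prompt: str) -> str:
    """Placeholder for the LLM call (e.g., a hosted completion endpoint)."""
    return "Returns are accepted within 30 days; standard shipping takes 3-5 business days."


def answer_with_sources(user_query: str) -> Dict[str, object]:
    docs = search_documents(user_query)
    context = "\n\n".join(f"[{d['id']}] {d['title']}: {d['text']}" for d in docs)
    summary = call_llm(
        "Summarize the documents below to answer the question.\n"
        f"Question: {user_query}\n\nDocuments:\n{context}"
    )
    # Keep the document ids so the answer stays traceable to its sources.
    return {"answer": summary, "sources": [d["id"] for d in docs]}


print(answer_with_sources("What is the return policy?"))
```

Keeping the document identifiers alongside the generated summary is what preserves the traceability Bern highlighted.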
What does Gartner think of ChatGPT-4
Gartner analysts have weighed in on the recent announcement of ChatGPT-4, offering a measured yet optimistic outlook. While acknowledging the technology’s early stage, they identified several intriguing capabilities.
One key feature is the ability to process both text and images, potentially leading to innovative applications that go beyond this basic combination. The analysts also recognized improvements in handling multiple languages, signifying a broader reach for the technology.
The ability to guide the AI through prompts, known as “steerability,” is seen as a major benefit. Gartner thinks this feature is essential for making the most of generative AI models.
While remaining cautious about claims of reduced factual errors, the analysts acknowledged the potential for improved creative text generation.
Importantly, Gartner emphasized that the true value of ChatGPT-4 lies in its ability to handle complex tasks. For simpler uses, the difference between this version and its predecessor, ChatGPT-3.5, might not be noticeable.
Overall, Gartner sees promise in ChatGPT-4 but highlights the need for further exploration and real-world testing before reaching definitive conclusions.
Generative AI vendor landscape
The analysts also provided a brief overview of the generative AI vendor landscape. While only a glimpse of the extensive market, this overview aims to provide a framework for understanding the various participants. Erick divided the vendors into three main categories:
- Applications – These vendors leverage existing LLMs and foundational models to create specific functionalities. They offer “canned capabilities” such as pre-built content creation tools, prompt engineering solutions, and even industry-specific applications like drug discovery in biotech. These applications can also integrate with productivity tools to enhance workforce productivity. Knowledge management is another area where application vendors are utilizing LLMs to improve information accessibility and reuse within organizations.
- Proprietary foundation models – This category encompasses the companies that develop the core LLMs, the building blocks of generative AI. Familiar names like OpenAI and Microsoft are included here, alongside a growing number of companies from China. The concern, as Erick highlighted, is the potential for a limited number of companies controlling this foundational technology.
- Open-source models – Here, organizations leverage openly available models like Hugging Face’s Transformers or Meta AI’s BlenderBot to develop applications. Even established data science platforms like Databricks are exploring this approach. This broadens access to generative AI capabilities, as companies can build upon these models without needing to develop their own from scratch.
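As a small illustration of that last category, the snippet below loads an openly available model through Hugging Face’s transformers library (it requires `pip install transformers torch`). The specific model, `distilgpt2`, is only a lightweight illustrative choice, not one recommended in the webinar.

```python
# Minimal example of building on an open-source model with Hugging Face's
# transformers library. The model choice (distilgpt2) is an illustrative,
# lightweight default, not a recommendation from the webinar.

from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Three ways generative AI can support knowledge management:",
    max_new_tokens=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```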
As Erick concludes, businesses seeking to leverage generative AI should carefully consider how these components will fit together within their existing systems.
Integrating these technologies often requires significant software engineering effort to ensure seamless operation. As such, understanding the vendor landscape and the challenges of integration is crucial for organizations looking to capitalize on the potential of generative AI.
Future enterprise trajectories with AI technology
In the next part of the webinar, the analysts discussed what they see as the most important future directions and where Gartner expects enterprise trajectories to head. Each offered a distinct perspective with a compelling explanation.
Bern Elliot’s prediction
Bern highlighted how Gartner’s predictions about the impact of generative AI, published in a report over a year ago, are proving remarkably accurate.
He focused on three key future directions:
- AI-augmented development and testing – By 2025, Gartner predicts that 30% of enterprises will leverage AI-assisted development and testing strategies, compared to a mere 5% at the time of the prediction. This trend, according to Bern, is accelerating even faster than anticipated.
- Generative design for websites and apps – According to the firm, by 2026, 60% of design efforts for new websites and mobile apps will be automated by generative design AI. This is due to the prevalence of both text and image content in these applications, making them ideal candidates for AI-powered design.
- The rise of the design strategist – By 2026, Gartner predicts that a new role, the “design strategist,” will emerge, combining the skills of designers and developers. This role is expected to lead 50% of digital product creation teams. Bern suggested that generative AI tools will empower these individuals by blurring the lines between development and design. Shift left strategies, where implementation begins alongside the design process, will become more common, leading to a more dynamic and interactive workflow.
Erick Brethenoux’s prediction
Erick offered a contrasting view to Bern’s optimistic vision. He argued that the true value of generative AI lies not just in content creation but in its ability to inform and guide decision-making processes.
Here are his key takeaways:
- The rise of the software grease monkeys – Erick playfully predicted the “revenge of the software grease monkeys.” While acknowledging his own background in AI, he emphasized the critical role of software engineers in operationalizing AI systems. According to Erick, getting these systems to deliver real business value has been the biggest challenge, and software engineers will be instrumental in bridging the gap between upstream design and downstream impact.
- The explosion of adaptive models – Erick highlighted the crucial role of adaptive models. Unlike the one-size-fits-all approach, these models can be customized to specific business problems and organizational content. He foresees a surge in vendors and even enterprises developing these models to personalize the value proposition of generative AI. This will involve a combination of machine learning, systems optimization, and knowledge graphs, forming what Gartner calls “composite AI.”
- Decision intelligence superseding generative AI hype – In a potentially even more provocative statement, Erick suggested that by 2024, decision intelligence will surpass the hype surrounding generative AI. His reasoning is that while generative AI excels at content creation, the ultimate goal is to use that content to make informed decisions.
Frances Karamouzis’s outlook
Frances directed the conversation towards the human element within enterprises navigating generative AI.
Here are her key points:
- Shift from code to data – Frances highlighted a client quote, stating: “1% of code is delivering 80% of net new value.” This implies a move from prioritizing lines of code to focusing on the data that fuels generative AI models. Interestingly, efficient code becomes even more valuable in this data-driven environment.
- Collaboration with robo-colleagues – By 2026, Gartner predicts that over 100 million people will collaborate with virtual AI colleagues in their daily work. This points to a future where humans and AI work side by side, with AI augmenting human decision-making through data analysis.
- Prompt Engineering – Frances explored a future job title, “prompt engineer.” As she explained, these specialists will be highly skilled in crafting effective prompts to optimize generative AI models.
- Fusion Teams – Finally, Frances emphasized the importance of “fusion teams” – a concept Gartner introduced earlier. These teams bring together “citizens” (business users), “professionals” (AI and software engineering specialists), and “business technologists” (those bridging the gap between business and technology). Figuring out how to effectively combine these roles will be a key challenge for enterprises seeking to maximize the value of generative AI.
How can enterprises secure their intellectual property
Integrating generative AI systems poses challenges for protecting intellectual property (IP). While employees might accidentally use copyrighted or confidential information, blocking access entirely would stifle innovation. According to Gartner, the solution lies in a balanced, multi-layered approach.
First, strong leadership policies are essential. These policies should clearly define acceptable data for AI input, focusing on protecting sensitive information like personally identifiable information (PII). Employee training on responsible AI use and data security supports these rules by creating a workforce that understands the importance of IP protection.
Second, vendors must be vetted carefully. Just like any external provider handling sensitive data, cloud-based AI vendors should be thoroughly evaluated. This includes reviewing their security practices and confirming their data protection measures to reduce the risk of leaks.
Last but not least, educating employees to recognize and avoid using sensitive information with AI helps create a culture of responsible data use. By understanding the legal and ethical aspects of data handling, employees can actively help protect the company’s IP.
Merging internal and external data for generative AI
Combining private company data with publicly available information is complicated, but it is essential for using generative AI well. Fortunately, solutions for this challenge are emerging.
Gartner is currently developing a resource outlining “design patterns” for such use cases. These patterns will explore various methods for combining your organization’s existing data with large language models. One promising approach involves “freezing” a pre-trained LLM and building an “adaptive model” on top of it.
This adaptive model serves a dual purpose. First, it allows you to leverage the LLM’s capabilities for tasks like question answering. Second, it incorporates your secure, internal data while maintaining strict control over what information feeds back into the LLM.
Several techniques can be employed to ensure data security. Rule-based systems can filter inputs to the LLM, preventing sensitive information from leaking. Additionally, creating a dialogue between the LLM and the adaptive model allows for human validation of the data flowing through the system.
This approach is similar to what is already done in machine learning, where there are roles like “machine learning validators.” These validators check the data at every stage to make sure it is suitable for its intended use.
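Before that human validation step, rule-based checks can screen inputs on their way to the model. Below is a toy sketch of such filtering; the regex patterns, keyword list, and helper names are illustrative assumptions, and a production system would rely on far more robust PII and secret detection.

```python
# Toy sketch of rule-based input filtering before text reaches the LLM.
# The patterns and term list are illustrative placeholders; real deployments
# would use more robust PII/secret detection.

import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN-like pattern
    re.compile(r"\b\d{16}\b"),                   # bare 16-digit card numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),      # email addresses
]
SENSITIVE_TERMS = ["confidential", "internal only"]  # assumed policy keywords


def is_safe_for_llm(text: str) -> bool:
    """Return False if the text appears to contain sensitive information."""
    lowered = text.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)


def guarded_llm_call(text: str, call_llm) -> str:
    """Only forward the text to the LLM if it passes the rule-based checks."""
    if not is_safe_for_llm(text):
        # Route to human review instead of sending the data onward.
        return "Input blocked: possible sensitive content; escalating for review."
    return call_llm(text)


print(is_safe_for_llm("Summarize Q3 supplier performance."))           # True
print(is_safe_for_llm("Customer SSN is 123-45-6789, please update."))  # False
print(guarded_llm_call("Summarize Q3 supplier performance.", call_llm=lambda t: "(LLM output)"))
```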
Mitigating UX risks in B2C applications of LLMs
Gartner identified potential UX risks associated with business-to-consumer (B2C) applications of large language models. These risks arise when users interact with conversational interfaces powered by LLMs in the background.
To mitigate risks, the company recommends several strategies. First, transparently informing users that they’re interacting with an AI is crucial. This prepares them for potentially unexpected responses. Second, restricting user prompts and the data fed into the LLM can help control the outputs.
Furthermore, mentioning where information comes from builds trust and empowers users to evaluate the information’s credibility. Gartner gave an example of an LLM summarizing customer service articles for a user and clearly stating the source of each article.
It’s important to acknowledge that many B2C LLM applications are currently agent-facing. In this scenario, an LLM generates responses that are reviewed and potentially rephrased by a human agent before reaching the customer. This approach offers a safeguard against inaccurate or misleading information, particularly during the early stages of LLM development.
Collaboration is key
Gartner highlighted the long history of cooperation between big companies and startups in AI, which is expected to continue with generative AI.
Usually, big companies provide interesting problems for startups to work on and test. However, handling IP is key. Here, Gartner offered some do’s and don’ts.
Big companies should not be too strict with startups about IP. While keeping a competitive edge is important, sharing some IP helps grow the market.
For that purpose, Gartner shared an example of Stora Enso, a Scandinavian manufacturing company. Stora Enso openly shared its problems and invited startups to help find solutions. This openness led to substantial new product development for the company.
Although using internal data and managing IP rights needs careful thought, working together offers great potential for generative AI innovation. Gartner’s focus on “adaptive models” highlights the value of this collaborative approach.
The full webinar recording is available for viewing on the Gartner website:
https://www.gartner.com/en/webinar/464445/1096048