The Rise of Generative AI: A Double-Edged Sword for Investors

Computer Science | Published: August 24, 2023

The rapid rise of generative artificial intelligence (AI) systems is reshaping industries and human creativity. While generative AI offers novel opportunities, it can also amplify a range of existing and emerging harms to individuals and society.

Generative AI uses machine learning to generate new code, text, images, audio, video, and multimodal simulations. It works by using large artificial neural networks, whose parameters are loosely inspired by synapses in the human brain, trained on enormous datasets. The difference between generative AI and other forms of AI is that its models can create new outputs, instead of only making predictions and classifications like other machine learning systems.
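As a loose illustration of that distinction, the toy sketch below contrasts a rule-based classifier (which can only assign a fixed label) with a character-level Markov chain that samples novel text. This is not a neural network, and all names in it are illustrative assumptions rather than anything from the article.

```python
import random

def classify(text: str) -> str:
    # Discriminative-style behaviour: map an input to one of a
    # fixed set of labels; nothing new is produced.
    return "question" if text.strip().endswith("?") else "statement"

def generate(corpus: str, length: int = 20, seed: int = 0) -> str:
    # Generative-style behaviour: learn character-to-character
    # transition statistics from the corpus, then sample fresh text.
    rng = random.Random(seed)
    transitions: dict[str, list[str]] = {}
    for a, b in zip(corpus, corpus[1:]):
        transitions.setdefault(a, []).append(b)
    ch = rng.choice(corpus)
    out = [ch]
    for _ in range(length - 1):
        ch = rng.choice(transitions.get(ch, list(corpus)))
        out.append(ch)
    return "".join(out)

print(classify("Is this generative?"))
print(generate("generative models create new outputs"))
```

The generator's output is new text that never appeared verbatim in the training corpus, which is the essential difference the paragraph above describes; real generative models replace the Markov table with billions of learned parameters.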

Some examples of generative AI applications include text-based chatbots, image or video generators, and voice generators. While these technologies have many potential benefits, such as improving content moderation and providing support for children who experience abuse, they also pose significant risks. For instance, misusing AI to generate child sexual exploitation and abuse (CSEA) material that looks like it involves real children is a pressing concern.

The Generative AI Lifecycle: Identifying Risks and Opportunities

To mitigate the risks associated with generative AI, it's essential to consider online safety risks and harms from the earliest stages of developing a generative AI technology. This should continue throughout the technology's lifecycle and across the entire system, from developing a business case to releasing, disseminating, and reintegrating AI-generated content.

The simplified product lifecycle for generative AI consists of 10 crucial steps where online safety risks and harms must be considered. These include:

1. Business case: Evaluating the business case for developing the technology and exploring options for funding.
2. Selecting data: Making choices about the type of model to create and the input data used to build and train it.
3. Training the model: Using machine learning algorithms to generate new outputs based on the selected data.
4. Refinement: Continuously refining model data throughout the lifecycle to minimize risks, harms, and bias.
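Teams can track these per-stage safety reviews with something as simple as a checklist structure. The sketch below is a hypothetical example of that idea: the stage names come from the steps listed above, while the class and function names are our own and not part of any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class LifecycleStage:
    # One stage of the generative AI product lifecycle, with a flag
    # recording whether its safety-risk review has been completed.
    name: str
    risks_reviewed: bool = False
    notes: list[str] = field(default_factory=list)

# The first four stages from the list above (the full lifecycle has ten).
STAGES = [LifecycleStage(n) for n in (
    "Business case", "Selecting data", "Training the model", "Refinement",
)]

def outstanding_reviews(stages: list[LifecycleStage]) -> list[str]:
    # Return the names of stages whose safety review is still pending.
    return [s.name for s in stages if not s.risks_reviewed]

STAGES[0].risks_reviewed = True
print(outstanding_reviews(STAGES))
```

The point of structuring reviews this way is that no release gate is passed while `outstanding_reviews` is non-empty, making "safety at every stage" auditable rather than aspirational.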

Regulatory Challenges and Approaches

As countries think about how to regulate generative AI, technology companies have been advocating for certain regulatory approaches that may serve their commercial interests. In Australia, the Government is examining the risks, benefits, and potential impacts of generative AI through various departments and forums.

To ensure online harm is prevented and addressed, multiple actors, including technology developers, downstream services, and users, must work together. This includes incorporating safety measures at every stage of the product lifecycle, consulting stakeholders from multiple sectors, and collaborating with the user community.

Safety by Design: A Crucial Component in Mitigating Risks

Safety by Design is built on three core principles: Service provider responsibility, User empowerment and autonomy, and Transparency and accountability. Technology companies can uphold these principles by making sure they incorporate safety measures at every stage of the product lifecycle.

This includes documenting capabilities through model cards or system cards, which explain how systems and models operate. Additionally, consulting experts who can provide guidance on inputs for training the model is essential to address culturally specific and contextual forms of harm.
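A model card can be as lightweight as a structured document listing intended uses, limitations, and mitigations. The sketch below is a minimal, hypothetical example of such a card; every field name and value is illustrative, not drawn from any published model card standard.

```python
# Illustrative model card as a plain dictionary; real model cards
# typically carry much richer detail (evaluation results, metrics, etc.).
model_card = {
    "model_name": "example-gen-model",  # hypothetical name
    "intended_use": "Drafting marketing copy",
    "out_of_scope_uses": [
        "Generating content involving minors",
        "Medical or legal advice",
    ],
    "training_data": "Licensed text corpus (description, not the data)",
    "known_limitations": ["May produce biased or inaccurate text"],
    "safety_mitigations": ["Output filtering", "Abuse reporting channel"],
}

def missing_sections(card: dict, required=("intended_use",
                                           "known_limitations",
                                           "safety_mitigations")) -> list:
    # Flag documentation gaps before a model is released.
    return [k for k in required if not card.get(k)]

print(missing_sections(model_card))
```

Running a check like `missing_sections` as part of a release process is one concrete way to turn the transparency-and-accountability principle into an enforceable step.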

Portfolio Implications: Conservative, Moderate, and Aggressive Approaches

Generative AI's impact on portfolios will depend on various factors, including the type of assets held and the investment strategy. Conservative investors may choose to avoid generative AI altogether because of its potential risks, while moderate investors may consider measured exposure, for example to companies applying generative AI to improve content moderation.

Aggressive investors, however, may see generative AI as an opportunity to gain a competitive edge in industries such as healthcare and finance.

Implementation Considerations: Timing and Entry/Exit Strategies

When implementing generative AI in portfolios, timing is crucial. Investors should consider the potential risks and benefits associated with each technology and industry before investing.

Entry strategies can include adding generative AI exposure to an existing portfolio through diversified positions, such as in companies using the technology to improve content moderation. Exit strategies may involve reducing exposure to particular technologies or industries that pose significant risks.

A Call to Action: Synthesizing Key Insights

Generative AI is a double-edged sword for investors, offering both novel opportunities and significant risks. By considering online safety risks and harms from the earliest stages of developing generative AI technology and incorporating safety measures at every stage of the product lifecycle, we can mitigate its potential downsides.

Investors should approach generative AI with caution, carefully weighing the potential benefits against the potential risks. By doing so, they can make informed decisions that align with their investment goals and risk tolerance.