The New Imperative of Responsible AI

08.15.2025

For decades, engineers have explored artificial intelligence (AI) to give machines more human-like capabilities. The past 10 years have brought significant progress, from smart assistants like Siri and Alexa understanding our commands and managing simple tasks to driverless cars offering an alternative to taxis in major cities. More recently, generative artificial intelligence (GenAI) has evolved at a breathtaking pace, creating new possibilities for individuals and businesses alike.

Artificial intelligence and its capabilities are fueled by large datasets that can include images, text, audio and numerical data, much of it gathered from the internet, along with the data companies collect about their customers. That’s why ethical technology governance to manage AI responsibly should matter to everyone.

Avoiding Unintended Consequences

How does the development and use of AI affect the people it touches? As companies pursue opportunities to make their businesses better and stronger with AI, they should also define initiatives to address the risk that misinformation or bias could create unintended consequences for customers. These consequences are often not the result of malicious intent, but a byproduct of how AI systems are designed, trained and deployed. Without initiatives to oversee the approach, companies may be exposing themselves to significant financial and reputational damage that could erode shareholder value.

Here are some of the ways a company’s use of AI can have unintended consequences:

  • Imperfect outcomes. While AI has progressed significantly, saving time and creating greater efficiency and precision for some tasks, it’s widely acknowledged that AI can make mistakes. Errors may not be immediately apparent due to the complexity of the underlying algorithms, and models can be used to create and disseminate misinformation. Companies should continue to uphold high standards for the integrity, quality and safety of the products and services they sell. Health insurers, for example, may face lawsuits for allegedly using algorithms to improperly deny care.
  • Biases. Any direct or indirect discriminatory biases in the underlying datasets could lead to outcomes that favor certain groups over others. For example, AI-powered recruitment tools that screen resumes may favor certain characteristics and limit access to the widest and best talent pool for the job. In addition, some companies have paid substantial settlements to resolve allegations of discriminatory practices in areas such as advertising and evaluating medical claims.
  • Privacy and security. Companies using AI for customer analytics, employee monitoring or predictive modeling may gather and analyze personal data in ways that violate privacy rights. For example, a company may collect data for one stated purpose, such as improving a product, but then use it to train an AI model for an entirely different purpose, which violates a core tenet of many privacy laws. Many AI models are also trained on data that was “scraped” from the public internet. This can include personal blog posts, social media content and even copyrighted material, and sensitive information may be unknowingly incorporated into an AI model. Additionally, companies are managing larger datasets than ever before, which creates an even greater risk of a data security breach in which sensitive personal information may be exposed to bad actors.

Building trust requires a full command of how an AI system functions and whether it achieves proper outcomes. AI systems are increasingly capable of exposing the reasoning behind their decisions. In business settings, an AI system could include the rationale for key decisions to help customers understand why, for example, a healthcare claim was denied. Businesses could also notify customers when they are interacting with an AI system and clearly describe the role it plays.

Hallmarks of Responsible AI

Over the past couple of years, many companies have been establishing or strengthening AI governance frameworks. When we meet with companies to evaluate their AI governance approach, we ask: What are your AI policies and governance structures? Who staffs them, and what are the key roles and responsibilities? How are they resourced? What frameworks do you use? What have these efforts looked like in practice, and are you assessing and reporting on them regularly?

Company engagement often involves an active, multi-year approach that can yield tangible results, such as publishing inaugural AI principles, implementing or strengthening AI governance, mitigating algorithmic biases in products and services, and assessing the downstream human rights risks and lifecycle impacts of AI products. We’ve seen success working with companies to establish the basic building blocks that demonstrate their systems are performing well and identifying risks.

Best practices for companies using AI include establishing formal internal governance structures with board-level oversight of material issues and aligning risk-management programs with established external frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. This framework offers a voluntary, structured approach to identifying, assessing and mitigating AI-related risks, emphasizing key characteristics of trustworthy AI. It’s a flexible framework that can be applied by companies of all sizes and in any sector.

Companies should have policies, practices and controls that prove these systems work, including:

  • Establishing clear governance structures throughout business strategies that rely on AI and disclosing a set of AI principles and commitments.
  • Integrating safety, bias and privacy considerations into product life cycles.
  • Providing clear insight into the lifecycle processes for product review.
  • Creating, strengthening and updating policies and programs for AI applications to consider individual impacts and unintended consequences and to provide methods for resolution or escalation of material issues.
  • Disclosing performance metrics, such as how many AI products were reviewed for heightened risks and what percentage were consequently modified or halted.

Transparency and accountability are also important aspects of building trust in AI outcomes. Companies are making significant investments in AI, and investors should be able to understand and evaluate a company’s efforts and progress. Companies should also demonstrate how their risk management processes prepare their businesses to mitigate these risks and capitalize on opportunities.

A Business Imperative

Just as GenAI has become an essential part of business operations, the responsible use of AI has become an essential component of strategy and risk management. And as AI systems become more powerful and embedded in society, it is increasingly critical for investor stewardship, through company engagements and proxy voting, to assess company policies and how they align with best practices.

This is not a temporary trend or a passing concern. It represents a fundamental shift in evaluating corporate responsibility and long-term value creation. The companies and investors that lead this transition can help ensure that this transformative technology is developed in a sustainable way that is aligned with long-term value creation.

Real-World Impact: Case in Point

While artificial intelligence presents a spectrum of challenges, from the massive energy consumption of data centers to concerns about worker displacement, our stewardship approach focuses on the ethical considerations and downstream risks AI poses when business outcomes have unintended consequences. Our engagements on this topic often look for assurance, transparency and accountability among companies integrating AI features into their products.

In 2023, we began engaging Intuit (INTU) to better understand how its AI governance approach aligned with best practices. The financial software provider has integrated AI throughout its product offerings in tax and personal finance software as well as its email marketing platform. Following our engagement on the topic, Intuit released public disclosure on its AI governance and risk management practices in the first half of 2025.

Investor Perspective: Responsible AI Recommendations
This year, Parnassus led the development of a set of investor recommendations for responsible AI in partnership with other investors. The document is intended to be a helpful reference for companies considering how to approach an ethical technology governance structure.


© 2010-2025 Parnassus Investments, LLC.