How to responsibly scale business-ready generative AI
Imagine opening a world of knowledge for improved learning and productivity simply by typing a text-based query. The possibilities are growing and include assisting with writing articles, essays or emails; accessing summarised research; generating and brainstorming ideas; dynamic search with personalised recommendations for retail and travel; and explaining complicated topics for education and training. With generative AI, search becomes dramatically different. Instead of receiving links to multiple articles, the user receives direct answers synthesised from myriad sources of data. It’s like having a conversation with a very smart machine.
What is generative AI?
Generative AI uses advanced machine learning algorithms that take user prompts and apply natural language processing (NLP) to generate answers to almost any question asked. It draws on vast amounts of internet data, large-scale pre-training and reinforcement learning to enable surprisingly human-like interactions. Reinforcement learning from human feedback (RLHF) helps the model adapt to different contexts and situations, becoming more accurate and natural over time. Generative AI is being evaluated for a variety of use cases including marketing, customer service, retail and education.
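The core idea, that a generative model learns patterns from training data and then samples likely continuations of a prompt, can be illustrated with a deliberately tiny sketch. This is a toy word-level bigram model, not a transformer or RLHF; the corpus and function names are purely illustrative:

```python
import random
from collections import defaultdict

# Toy illustration only: real generative AI uses large transformer
# networks trained on vast datasets, not a word-level bigram table.
def train_bigram_model(text):
    """Record which words follow which in the training data."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    random.seed(seed)
    output = [start_word]
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:
            break  # no learned continuation for this word
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = ("generative ai learns patterns from data and "
          "generative ai generates text from patterns")
model = train_bigram_model(corpus)
print(generate(model, "generative"))
```

Scaling this idea up by many orders of magnitude, with far richer pattern matching, is what makes the outputs feel human-like; it also explains why flaws in the underlying data flow straight into the generated content.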
ChatGPT was the first but today there are many competitors
ChatGPT uses a deep learning architecture called the Transformer and represents a significant advancement in the field of NLP. While OpenAI has taken the lead, the competition is growing. According to Precedence Research, the global generative AI market was valued at USD 10.79 billion in 2022 and is expected to reach around USD 118.06 billion by 2032, a 27.02% CAGR between 2023 and 2032. This is all very impressive, but not without caveats.
Generative AI and risky business
There are some fundamental issues with using off-the-shelf, pre-built generative models. Each organisation must balance opportunities for value creation against the risks involved. Depending on the business and the use case, organisations with a low tolerance for risk will find that either building in-house or working with a trusted partner yields better results.
Concerns to consider with off-the-shelf generative AI models include:
- Internet data is not always fair and accurate. At the heart of much of today’s generative AI are vast amounts of data from sources such as Wikipedia, websites, articles, and image or audio files. Generative models match patterns in the underlying data to create content, and without controls there can be malicious intent to advance disinformation, bias and online harassment. Because this technology is so new, there is sometimes a lack of accountability and increased exposure to reputational and regulatory risk pertaining to things like copyrights and royalties.
- There can be a disconnect between model developers and all model use cases. Developers of generative models may not foresee the full extent of how a model will be used and adapted downstream for other purposes. The resulting faulty assumptions and outcomes are not crucial when errors involve less important decisions, such as selecting a product or service, but they matter when a business-critical decision is affected, opening the organisation to accusations of unethical behaviour, including bias, or to regulatory compliance issues that can lead to audits or fines.
- Litigation and regulation impact use.
Concern over litigation and regulation will initially limit how large organisations use generative AI. This is especially true in highly regulated industries such as financial services and healthcare, where tolerance is very low for unethical or biased decisions, and where incomplete or inaccurate data and models can have detrimental repercussions.
Eventually, the regulatory landscape for generative models will catch up, but companies will need to adhere to new rules proactively to avoid compliance violations, harm to their reputation, audits and fines.
What can you do now to scale generative AI responsibly?
As the outcomes of AI insights become more business-critical and technology choices continue to grow, you need assurance that your models are operating responsibly, with transparent processes and explainable results. Organisations that proactively infuse governance into their AI initiatives can better detect and mitigate model risk while strengthening their ability to meet ethical principles and government regulations.
Of utmost importance is to align with trusted technologies and enterprise capabilities. You can start by learning more about the advances Turba Media is making in new generative AI models with Turba Media ai and proactively put Turba Media.governance in place to drive responsible, transparent and explainable AI workflows, today and for the future.
What is Turba Media governance?
Turba Media governance provides a powerful governance, risk and compliance (GRC) toolkit built to operationalise AI lifecycle workflows, proactively detect and mitigate risk, and improve compliance with growing and changing legal, ethical and regulatory requirements. Customisable reports, dashboards and collaborative tools connect distributed teams, improving stakeholder efficiency, productivity and accountability. Automatic capture of model metadata and facts provides audit support while driving transparent and explainable model outcomes.
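Conceptually, capturing model metadata and facts across the lifecycle for audit support might resemble the following sketch. This is a hypothetical illustration only: the record fields, class names and lifecycle stages are assumptions, not Turba Media governance's actual API or data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of lifecycle metadata capture for audit support;
# all names and fields here are illustrative, not a real product API.
@dataclass
class ModelFactRecord:
    model_name: str
    lifecycle_stage: str   # e.g. "development", "validation", "production"
    recorded_at: str       # UTC timestamp of when the fact was captured
    facts: dict = field(default_factory=dict)

class AuditLog:
    """Append-only log of model facts, supporting later audit review."""

    def __init__(self):
        self._records = []

    def capture(self, model_name, stage, **facts):
        """Record a set of facts about a model at a lifecycle stage."""
        record = ModelFactRecord(
            model_name=model_name,
            lifecycle_stage=stage,
            recorded_at=datetime.now(timezone.utc).isoformat(),
            facts=facts,
        )
        self._records.append(record)
        return record

    def history(self, model_name):
        """Return the full captured trail for one model, in order."""
        return [r for r in self._records if r.model_name == model_name]

log = AuditLog()
log.capture("credit-risk-model", "development", training_data="loans_2022.csv")
log.capture("credit-risk-model", "validation", bias_check="passed")
print([r.lifecycle_stage for r in log.history("credit-risk-model")])
```

The design point the sketch tries to convey is that facts are captured automatically as a model moves through its lifecycle, so the audit trail exists before anyone asks for it, rather than being reconstructed after the fact.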
Accelerate governance and simplify risk management across your entire organisation with Turba Media OpenPages, a unified GRC solution to help manage, monitor and report on risk and compliance. Learn more about how Turba Media governance is driving responsible, transparent and explainable AI workflows and about the enhancements coming in the future.