The boom in artificial intelligence and superintelligent computing has taken the world by storm. Experts call the AI revolution a “generational event” that will forever change the world of technology, information exchange and communication.
Generative AI has redefined the measure of success and advancement in the field, creating new opportunities in every sector, from medicine to manufacturing. The advent of generative AI, along with deep learning models, has made it possible to take raw data and prompts to create text, images, and other media. The technology relies heavily on supervised machine learning from data sets, meaning these systems can grow their repertoire and become increasingly adaptable and responsive as they are fed data.
Kevin Scott, Chief Technology Officer at Microsoft, writes about how AI is changing the world, describing how generative AI will help unleash humanity’s creativity, provide new ways to “unlock faster iteration,” and create new opportunities for productivity: “Applications are likely limited only by one’s ability to imagine scenarios in which productivity-enhancing software could be applied to complex cognitive work, whether that’s editing videos, writing text, designing new drug molecules, or creating manufacturing recipes from 3D models.”
Microsoft and Google are at the forefront of this development and have made amazing advances in AI technology over the past year. Microsoft has seamlessly integrated technology into its search functions, while creating platforms for developers to innovate in other useful areas. Google has also made great progress on this front, showing great promise with its Bard platform and PaLM API.
However, with the promise of endless possibilities comes enormous responsibility.
In fact, the rise of generative AI has raised many concerns about how best to develop these platforms in a fair, equitable, and secure way.
One of the main concerns is building systems that deliver fair and appropriate results. A few years ago, Amazon had to dismantle an artificial intelligence system the company was testing to streamline its hiring process. To bring automation into hiring, the company had built an AI system that could sort candidates’ resumes and help identify the best talent, based on historical recruitment data. But a big problem emerged: because the system relied on models trained on historical data, and because the tech industry has historically been male-dominated, the system kept selecting men to move forward in the hiring process. Although Amazon’s recruiters used the system only for recommendations and it never made final decisions, the company ultimately scrapped the entire program to ensure full transparency and fairness in the process going forward.
This incident highlighted a major problem for developers: AI systems are only as good as the data they are trained on.
Aware of the danger of such issues, Google has been notably proactive in its approach to development. Earlier this month, at Google’s annual developer conference, executives devoted an entire segment to “responsible AI,” assuring the audience that it is a core priority for the company.
Indeed, Google strives to be transparent about its safeguards and lays out the main challenges of developing responsible AI: “The development of AI has created new opportunities to improve the lives of people around the world, from business to healthcare to education. It has also raised new questions about the best way to build fairness, interpretability, privacy, and security into these systems.” Echoing the problem Amazon faced, Google discusses the importance of the data, inputs, and models used to train AI systems: “Machine learning models will reflect the data they are trained on, so analyze your raw data carefully to ensure you understand it. In cases where this is not possible, e.g., with sensitive raw data, understand your input data as much as possible while respecting privacy; for example by computing aggregate, anonymized summaries.” In addition, the company stresses that users should understand the limitations of data models, test systems frequently, and monitor results closely for any signs of bias or error.
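To make that last point concrete, here is a minimal, purely illustrative sketch (not from the article or from Google’s tooling) of one common way teams monitor results for bias: comparing selection rates across groups and flagging large gaps, in the spirit of the “four-fifths rule” used in hiring audits. The data and the 0.8 threshold are assumptions for illustration only.

```python
# Illustrative bias check: compare selection rates across groups.
# Records are (group, was_selected) pairs; data here is hypothetical.

def selection_rates(records):
    """Compute the fraction of selected candidates per group."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    A value below 0.8 is a common red flag in hiring audits."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups.
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(records)   # {'A': 0.75, 'B': 0.25}
ratio = disparate_impact_ratio(rates)
print(rates, ratio)  # ratio ≈ 0.33 — well below 0.8, a warning sign
```

A real audit would of course use far larger samples and statistical tests, but even a simple check like this, run regularly, is the kind of monitoring the article describes.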
Likewise, Microsoft has worked hard to uphold responsible AI standards: “We put our principles into practice by taking a people-centered approach to the research, development, and deployment of AI. To achieve this, we embrace diverse perspectives, continuous learning, and agile responsiveness as AI technology evolves.” More broadly, the company says its goal with AI is to make a lasting, positive impact on society’s biggest challenges and to innovate in ways that are both useful and safe.
Other companies innovating in this area should also invest in developing these systems responsibly. Building and adhering to “responsible AI” will undoubtedly cost tech companies billions of dollars annually, as they iterate again and again to create systems that are fair and reliable. Although this cost may seem prohibitive, it is absolutely necessary. Artificial intelligence is a new and incredibly powerful technology that will inevitably disrupt many industries in the coming years. That is why the foundations of this technology must be solid. Companies need to build these systems in a way that instills user confidence and genuinely benefits the community. Only then will the true potential of this technology be unleashed, making it an asset to society rather than a curse.
Translated article from Forbes US – Author: Sai Balasubramanian, MD, JD