Ever since conversational chatbots that leverage the power of generative AI (GenAI) became openly available to the public, their adoption has been phenomenal. In the wake of OpenAI’s release of the even more sophisticated GPT-4o, with its ability to reason across sound, vision and text in real time, the allure is likely even stronger.
With the ability to take on roles like customer service agent, executive assistant or designer, the latest large language models (LLMs) have already had a transformative impact on how knowledge-based tasks are performed.
Yet in the rush to keep up with the pace of change, the downsides to the technology are also playing out in real time.
From inaccurate outputs to losing ownership of private datasets, the risks are real and could undermine trust in AI if not properly addressed.
With more than 330,000 employees in over 80 countries and regions, the NTT Group is a world leader in providing technology and business solutions. The NTT Innovation Laboratory Israel facilitates synergies between NTT and the Israeli ecosystem, helping to develop innovative solutions for partners ranging from multinationals and venture capitalists through to government and academia.
We had the opportunity to speak to Moshe Karako, Chief Technology Officer at the NTT Innovation Lab in Israel, to get a more detailed understanding of how AI can be used safely and ethically in the real world.
Where do the main concerns about using AI come from?
As chatbots and virtual assistants become increasingly ubiquitous, the question of security is often an afterthought. In many cases, companies choose to leverage AI for a specific purpose, such as handling customer inquiries, which means an informed decision was likely made. However, GenAI tools are now so freely and easily available that it’s become harder to manage how and where the software is being applied to support company operations. Tasking AI with drafting a batch of emails probably seems innocuous to most employees, but it can create a number of security vulnerabilities. We know that hackers will look to exploit the use of public AI tools, either to install malware or to access data. It’s also important to remember that data entered into third-party AI applications is likely to be collected and stored by the provider. When it comes to freely available GenAI models, data privacy is only possible by “opting out” of using them entirely. This reality has led some companies to issue a blanket ban on their use. However, that approach means the benefits of AI are also lost.
Are there ways to address these issues around security and privacy?
Many of our clients see the clear advantages that conversational AI can offer. However, leveraging the LLMs offered by third-party tech providers isn’t an option in many cases. This may be because the data being handled is highly sensitive or extremely valuable. Many corporations have strict policies that aim to safeguard IP, which means any use of public LLMs would be a breach of regulations. We’re seeing a rise in the number of private LLMs being developed across industries. For example, a famous luxury perfume brand uses a dedicated vault to keep the precious IP of its perfume formulae secure while also allowing AI to suggest new scent formulae based on unique training models. Although the cost of developing a private LLM means that private enterprises currently have more flexibility to explore this option, we’re also seeing growing demand from the public sector to safeguard data at the government level.
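To illustrate what a private LLM deployment can look like in practice, here is a minimal, hypothetical sketch: an open-weight model hosted entirely on an organization’s own infrastructure, so prompts and proprietary data never reach a third-party provider. The model name and prompt below are illustrative assumptions, not a description of any client’s actual system.

```python
# A minimal sketch of a "private LLM": an open-weight model run locally with
# Hugging Face Transformers. Nothing entered here is sent to an external API.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # any locally hosted open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# The prompt (and any proprietary data it references) stays on-premises.
prompt = "Suggest three variations on our base formula, keeping the top notes unchanged."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```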
Aside from private LLMs, how else can security be improved?
AI works best when it has vast, rich data sets to train with. Private LLMs address security and privacy concerns very well, but the downside is that data becomes siloed. Federated learning is one solution that may become more common in the future. Federated learning allows the model itself to be trained across multiple data sources while the underlying data stays where it is. For example, AI promises to revolutionize healthcare, but developing reliable algorithms that can be validated in a clinical setting won’t be achieved with small data sets. In theory, federated learning could provide an answer, allowing the algorithmic model to train on information from a variety of hospitals and health systems without needing to collate all the sources together. The problem is that there is not yet proper standardization in place, making large-scale collaborations difficult.
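To make the idea concrete, the following is a toy sketch of the federated averaging pattern: each participating site trains a copy of the model on its own records, and only the resulting model parameters, never the records themselves, are combined into a shared model. The data, model and parameter choices here are illustrative placeholders, not a production system.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One site's training pass on its own data; the raw records never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_weights, sites):
    """Combine per-site parameter updates by weighted averaging (the FedAvg idea)."""
    updates, sizes = [], []
    for data, labels in sites:
        updates.append(local_update(global_weights, data, labels))
        sizes.append(len(labels))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

# Illustrative data for three "hospitals", kept strictly separate.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
sites = []
for n in (40, 60, 50):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

w = np.zeros(3)
for _ in range(20):                  # communication rounds: only parameters are exchanged
    w = federated_average(w, sites)
print("learned weights:", w)
```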
Can you offer any final recommendations on how to balance the risks and rewards of AI?
The arrival of offerings like ChatGPT has helped to democratize access to AI. Yet there are good sides and bad sides to any emerging technology, and GenAI is no different. Each use case needs to be individually evaluated to understand the pros and cons in detail. For instance, it’s common knowledge that GenAI suffers from hallucinations, or as I like to call it, daydreaming. The responses can be comical at times but are far from ideal if they appear within a company’s service offering. In addition, saving time by outsourcing tasks to AI isn’t always beneficial, as the development team at one major tech firm discovered when Copilot was used to help out with coding. The immediate task was taken care of, but full ownership of the vital source code was lost as a result. To summarize, just because we have access to a technology doesn’t mean it should be used indiscriminately. To support the safe and trustworthy use of AI, we also need to take ownership of the decision of when and where to apply these tools.
Dalia Cohen has worked at magazines such as Newsweek, Fortune and TechCrunch during her editorial career. She is actively involved in many NGOs and writes articles on topics such as politics, technology and business. She also works actively on issues surrounding antisemitism and women's rights.