We could say goodbye to ChatGPT weirdness thanks to Nvidia

Nvidia is the tech giant behind the GPUs that power our games, run our creative suites, and – as of late – play a crucial role in training the generative AI models behind chatbots like ChatGPT. The company has dived deeper into the world of AI with the announcement of new software that could solve a big problem chatbots have – going off the rails and being a little…strange.

The newly announced NeMo Guardrails is a piece of software designed to ensure that smart applications powered by large language models (LLMs), such as AI chatbots, are “accurate, appropriate, on topic and secure”. Essentially, the guardrails weed out inappropriate or inaccurate information generated by the chatbot, stop it from reaching the user, and inform the bot that the specific output was bad. Think of it as an extra layer of accuracy and security – one that no longer relies on the user to catch mistakes.

The open-source software lets AI developers set up three types of boundaries for their AI models: topical, safety, and security guardrails. We’ll break down the details of each – and why this sort of software is both a necessity and a liability.

What are the guardrails?

Topical guardrails prevent the AI bot from straying into topics that are unrelated or unnecessary to the task at hand. In its statement, Nvidia gives the example of a customer service bot that won’t answer questions about the weather. If you’re talking about the history of energy drinks, you wouldn’t want ChatGPT to start telling you about the stock market. In short, everything stays on topic.

This would be useful in huge AI models like Microsoft’s Bing Chat, which has been known to get a bit off-track at times, and could help us avoid more tantrums and inaccuracies.

The safety guardrail tackles misinformation and ‘hallucinations’ – yes, hallucinations – ensuring the AI responds with accurate and appropriate information. That means banning inappropriate language, encouraging citations of credible sources, and preventing the use of fictitious or illegitimate ones. This is especially useful for ChatGPT, as we’ve seen many examples across the internet of the bot making up citations when asked.

As for the security guardrails, these simply stop the bot from reaching external applications ‘deemed unsafe’ – in other words, any app or piece of software it hasn’t been given explicit permission and purpose to interact with, such as a banking app or your personal files. This means you’ll get streamlined, accurate, and safe information each time you use the bot.
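The “explicit permission” part boils down to an allow-list: the bot may only call tools that a developer has named in advance, and everything else is rejected. A minimal sketch of that idea – the tool names here are hypothetical, and this is not Nvidia’s actual code:

```python
# Illustrative sketch of a security guardrail: an explicit allow-list of
# external tools the bot may call. Tool names are hypothetical examples.
ALLOWED_TOOLS = {"search_docs", "order_lookup"}

def call_tool(tool_name: str, query: str) -> str:
    """Run an external tool on the bot's behalf, but only if it's allowed."""
    if tool_name not in ALLOWED_TOOLS:
        # Deny by default: anything not explicitly permitted is unsafe.
        raise PermissionError(f"Tool '{tool_name}' is not on the allow-list")
    return f"(result of {tool_name} for: {query})"

print(call_tool("order_lookup", "order 42"))  # permitted
# call_tool("banking_app", "balance")         # would raise PermissionError
```

A deny-by-default allow-list is the key choice: the bot can’t be talked into reaching a new app, because the decision isn’t made by the model at all.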

Morality Police 


