Wednesday, February 14, 2024

Secure AI usage both at home and at work | Kaspersky official blog

Last year’s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio — AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues, first and foremost the minimizing of confidential data leaks, and also the level of account/device hacking through various AI tools — let alone create proper safeguards against a futuristic “evil AI”. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you’ll have to pick up a few skills and help yourself.

So, how do you use AI without regretting it later?

Filter important data

The privacy policy of OpenAI, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, these are solving technical issues and preventing terms-of-service violations: in case someone gets an idea to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. Second, the data may be used for training new GPT versions and making other product “improvements”.

Most other popular language models — be it Google’s Bard, Anthropic’s Claude, or Microsoft’s Bing and Copilot — have similar policies: they can all save dialogs in their entirety.

That said, inadvertent chat leaks have already occurred due to software bugs, with users seeing other people’s conversations instead of their own. The use of this data for training could also lead to a data leak from a pre-trained model: the AI assistant might give your information to someone if it believes it to be relevant for the response. Information security experts have even designed multiple attacks (one, two, three) aimed at stealing dialogs, and they’re unlikely to stop there.

So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.

Don’t send any personal data to a chatbot. No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data that belongs to you, your company, or your customers must end up in chats with an AI. You can replace these with asterisks or “REDACTED” in your request.

Don’t upload any documents. Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading of a multi-page document, you risk leaking confidential data, intellectual property, or a commercial secret such as the release date of a new product or the entire team’s payroll. Or, worse than that, when processing documents received from external sources, you might be targeted with an attack that counts on the document being scanned by a language model.

Use privacy settings. Carefully review your large-language-model (LLM) vendor’s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.

Sending code? Clean up any confidential data. This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.

Limit the use of third-party applications and plug-ins

Follow the above tips every time — no matter what popular AI assistant you’re using. However, even this may not be sufficient to ensure privacy. The use of ChatGPT plug-ins, Bard extensions, or separate add-on applications gives rise to new types of threats.

First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unlikely corners of your computer or smartphone.

Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this attack that creates new GitHub repositories on behalf of the user.

Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app-screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high — all the more so because it seems no one really checks the creators or their contacts.

How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren’t smooth enough, and the creators themselves don’t always take care to design plug-ins properly or comply with information security requirements. This whole ecosystem needs more time to mature and become securer and more reliable.

Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and “system prompt” templates that customize the assistant for a specific task (“Act as a high-school physics teacher…”). These wrappers certainly aren’t worth trusting with your data, as you can accomplish the task just fine without them.

If you do need certain plug-in features right here and now, try to take maximum precautions available before using them.

  • Choose extensions and add-ons that have been around for at least several months and are being updated regularly.
  • Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.
  • If the plug-in comes with a privacy policy, read it carefully before you start using the extension.
  • Opt for open-source tools.
  • If you possess even rudimentary coding skills — or coder friends — skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.

Execution plug-ins call for special monitoring

So far, we’ve been discussing risks relating to data leaks; but this isn’t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user’s command — such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions — for example, by buying tickets with the victim’s money. This type of attack is referred to as prompt injection, and although the developers of various LLMs are trying to develop a safeguard against this threat, no one has managed it — and perhaps never will.

Luckily, most significant actions — especially those involving payment transactions such as purchasing tickets — require a double confirmation. However, interactions between language models and plug-ins create an attack surface so large that it’s difficult to guarantee consistent results from these measures.

Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.



from Kaspersky official blog https://ift.tt/7nwpAtF
via IFTTT

No comments:

Post a Comment