What you REALLY need to know about AI

There's no denying that Artificial Intelligence ("AI") is sexy. It's new, it's controversial, and it is BOOMING. It seems that everyone is jumping on the AI bandwagon, offering new tools to make work more efficient, marketing more effective, and downtime more fun.

When it comes to your business, however, it is a good idea to practice a modicum of restraint with AI. The technology is so new that the law has not caught up with it yet, and probably won't for several years. Moreover, when technology is applied so broadly to all kinds of data, it raises serious security questions. Finally, much of today's business AI is generative in nature, which means it is prone to "hallucinations," i.e., it makes things up based on your data and, therefore, can be unreliable in the wrong hands.

Let's unpack that line by line. But first...

What Is AI, Anyway?

People talk about AI as though it is a single thing. It isn't. Just like different operating systems run on different computers, each AI system is distinct. Some AI engines are better at handling graphics or images and creating composites of the data set, some are better at predicting the outcomes of laboratory experiments based on prior results, and some are better at crunching huge data sets to forecast outcomes (e.g., stock market predictions or climate change). So before jumping into the AI pool, know which AI you are using and its intended use.

Most AI for business is of the number-crunching and/or predictive-text variety we call "Generative AI" because it generates "new" data from existing data. Basically, generative AI uses an artificial neural network (i.e., software that mimics the way networks of neurons process information) containing a number of hidden layers through which data is processed. The more data the AI is fed, the more it learns. The more the AI learns, the better it is able to recognize complex patterns, make connections, and weigh input. Before an AI can be turned into an application you can use in your business, it has to begin learning by processing existing data sets that are (or should be) similar to yours. As with anything computer-related, garbage in equals garbage out. In other words, the better the data used to train the AI, the better the predictive analytics it will produce.
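For readers curious what "hidden layers learning patterns from data" actually means, here is a toy sketch in a few dozen lines of plain Python. It is a hypothetical classroom example, not any commercial AI product: a tiny network with one hidden layer learns the XOR pattern (a pattern a single artificial neuron cannot represent) by repeatedly adjusting its weights to reduce its error on the training data.

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: XOR -- output 1 only when exactly one input is 1.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

HIDDEN = 4
# Random starting weights: hidden layer (2 inputs + bias each)
# and output neuron (HIDDEN inputs + bias).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

def forward(x):
    """Pass one input through the hidden layer, then the output neuron."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(wo * hv for wo, hv in zip(w_out, h)) + w_out[-1])
    return h, y

def total_loss():
    """Squared error over the whole training set."""
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

before = total_loss()
LR = 1.0  # learning rate: how big each weight adjustment is
for _ in range(5000):  # "feeding the AI data" over and over
    for x, t in DATA:
        h, y = forward(x)
        # Backpropagation: push the error back through the layers
        # and nudge every weight to shrink it slightly.
        d_y = (y - t) * y * (1 - y)
        for j in range(HIDDEN):
            d_h = d_y * w_out[j] * h[j] * (1 - h[j])
            w_out[j] -= LR * d_y * h[j]
            w_hidden[j][0] -= LR * d_h * x[0]
            w_hidden[j][1] -= LR * d_h * x[1]
            w_hidden[j][2] -= LR * d_h
        w_out[-1] -= LR * d_y
after = total_loss()
```

After training, the network's error on the data is far lower than at the start, which is all "learning" means here. Note what the sketch also illustrates about garbage in, garbage out: the network can only ever reflect the patterns in DATA. Feed it mislabeled or skewed examples and it will faithfully learn the garbage.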

Now then...

The Law Is a Decade Behind the Tech

To say that AI developers have little statutory guidance is an understatement. More accurately, the applicable law was written for circumstances that existed more than a century ago. To date, the United States has no comprehensive data privacy legislation. What exists is a patchwork of state laws and federal regulations, nothing that addresses today's technology head-on.

The law that we know applies to AI includes intellectual property laws (i.e., trademark, copyright, patent, and trade secret laws), laws regulating the commercial use of a person's name, image, and likeness, and other laws regarding the infringement of personal rights. Then, there are laws regulating commerce generally, like the Florida Deceptive and Unfair Trade Practices Act ("FDUTPA") and the rules regarding the discovery of electronically stored information in litigation.

Note that AI is not a lawyer (that is not a thing), and neither are the people coding the AI. You cannot trust an AI to write contracts, keep you compliant with the law, or maintain secrets or privacy. AI is a tool, not a replacement for professional services.

TL;DR: Before you adopt AI technology for your business, you have to know what the AI is doing with your data so you can determine which laws in the current patchwork affect your use of it. Never adopt an AI without first running it on test data and seeing what your input and output look like. And, of course, never adopt a new AI without having a chat with your attorney.

Security Issues Abound

Once an AI system incorporates data into itself, the data is "learned" and cannot be unlearned. So, if an AI is given a data set containing information that is subject to data privacy law (like protected health information subject to HIPAA), that information cannot simply be deleted from the AI.

Certain laws impose on a business owner the duty to safeguard the data of third parties. These range from the European Union's GDPR privacy legislation to your contract with your merchant services provider, which requires compliance with the PCI DSS security standards. In general, you have a duty to protect your customers' names, email addresses, home addresses, Social Security numbers, payment information, health information, and other similar personal data from access by third parties. Submitting that data to an AI that is not a closed environment (i.e., one limited to your business and only your business) can violate that duty. You also have a duty to tell your customers if their data will be processed by an AI, which is usually done in your online privacy policy.

And since all AI is data-driven, it is as susceptible to hacking as any other database. Your AI must be secured from intruders, and any data breach of an AI must be disclosed to the persons affected.

Generative Technology Can Create Unexpected Results

Most business owners do not understand the predictive and generative nature of AI. Unlike a 20th-century database platform, AI exists to modify the data it stores. Once your data is fed to the AI, it will be assimilated, aggregated, and used to create new data. It is imperative that you understand how you will use the output.

If, for example, you want to use generative AI to write the Great American Novel, you might feed the AI the works of Melville, Twain, Faulkner, Hemingway, Bradbury, and King to inform it, then give it some plot points: write a novel about a road trip by two brothers from New York to California. The AI will spit out a novel borrowed from the authors it has "read," but it lacks the ethical framework to avoid copyright infringement or plagiarism. So, if you intended to publish the novel as your own, the work produced by the AI probably would not fit that purpose.

If, however, you are a physician using a closed AI system to produce draft case notes after a patient visit, based on the way you have charted in the past, which you then edit rather than writing out every note, the AI might be useful to you (IF it were also HIPAA compliant, but assume it isn't, because AI is just too new).

Generative AI can "hallucinate" and create new data out of thin air. For example, an attorney used an AI to perform legal research, and the AI invented a fictional case that the lawyer then cited in a court brief. The result was the lawyer being sanctioned by the court.

The moral of the story is that AI cannot be trusted to be ethical. You have to police it carefully and use it wisely.