
What are the limitations of large language models?

Having covered the principles, I think it's worth discussing the limitations of large language models, so that we know where the boundaries of AI lie and can innovate within them.

Limitation 1: Results highly dependent on training data

As I mentioned earlier, today's language models, even after four generations of development, still have not moved beyond "probability calculation". At their core, they still "use existing information to predict other information".

This means that large language models actually have no consciousness or sense of right and wrong. Their generated results are highly dependent on the training data.
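To make "using existing information to predict other information" concrete, here is a minimal sketch of the final step of next-token prediction. The vocabulary, the logit values, and the prompt are all invented for illustration; a real model computes its scores with billions of parameters, but it ends the same way, with a softmax and a sample:

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate
# next tokens after the prompt "The cat sat on the".
# All of these numbers are made up for illustration.
logits = {"mat": 4.1, "sofa": 2.9, "roof": 2.2, "moon": -1.0}

def softmax(scores, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Sample the next token in proportion to its probability; this is
# the "probability calculation" the whole model exists to perform.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("predicted next token:", next_token)
```

Notice there is no step where the model checks whether "mat" is true; it only checks whether "mat" is likely.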

What consequences could this lead to?

  1. If the data is biased (for example, it contains racist or sexist content) or simply incorrect, the model's output will reproduce those flaws (see the sketch after this list).
  2. If a domain is only thinly covered in the data, the model's accuracy in that domain will be very low. It may even hallucinate (that is the second limitation, below).
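The first consequence can be demonstrated with a toy count-based bigram model: whatever skew exists in the corpus becomes the model's "belief". The four-sentence corpus below is deliberately lopsided and entirely made up:

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus: "said" is followed by "she"
# three times and "he" once. Every sentence here is invented.
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the nurse said she was late . "
    "the nurse said he was tired ."
).split()

# Count next-word frequencies for every word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# The "model" simply reproduces the proportions in its training data.
counts = bigrams["said"]
total = sum(counts.values())
for word, c in counts.most_common():
    print(f"P({word!r} | 'said') = {c / total:.2f}")
# P('she' | 'said') = 0.75
# P('he' | 'said') = 0.25  (the skew in the data is now the model)
```

A trillion-parameter transformer is vastly more capable than this, but its dependence on what the data happens to contain is the same in kind.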

So what can be done to work around this limitation?

The first method is to use LLMs in domains where the data is abundant and the error rate is low.

For example, language-centric tasks such as translation or copywriting. Even if the data contains grammatical errors, the same usages appear so often that the model's answers can, to some extent, be considered correct. It is somewhat like ancient Chinese phonetic loan characters, or much of today's slang.

The second method is human intervention.

Simply put, this means using human feedback to manually remove or reinforce content in the data. However, human intervention does not necessarily solve the problem; it can create new ones, such as baking human subjective preferences into the model, or even injecting fabricated information into it.
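As a rough illustration of the "removing" half of this, here is a sketch of a keyword-based pre-training data filter. The blocklist terms and documents are placeholders I made up; real pipelines typically combine trained classifiers with human review rather than a word list. Note how whoever writes the list is already injecting their own judgment:

```python
# Hypothetical pre-training data filter. The blocklist terms and the
# documents are invented placeholders for illustration only.
BLOCKLIST = {"badword_a", "badword_b"}

documents = [
    "a perfectly ordinary sentence about toothbrushes",
    "an offensive sentence containing badword_a",
]

def keep(doc: str) -> bool:
    """Keep a document only if it contains no blocklisted term.
    Choosing what goes on the list is itself a subjective act."""
    return set(doc.lower().split()).isdisjoint(BLOCKLIST)

cleaned = [doc for doc in documents if keep(doc)]
print(cleaned)  # only the first document survives
```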

Limitation 2: Hallucinations

Simply put, the model talks confident nonsense. For questions with little related data, the probability calculation may spit out very strange content. For example, I asked the AI to introduce a smart toothbrush by a brand called Boie, a product that does not exist:

Tell me about AeroGlide UltraSlim Smart Toothbrush by Boie

The AI's response shows it earnestly making things up:

The AeroGlide UltraSlim Smart Toothbrush by Boie is a high-tech toothbrush that uses advanced sonic technology to provide a deep and thorough clean. It features a slim and sleek design that makes it easy to hold and maneuver, and it comes with a range of smart features that help you optimize your brushing routine.

One of the key features of the AeroGlide UltraSlim Smart Toothbrush is its advanced sonic technology, which uses high-frequency vibrations to break up plaque and bacteria on your teeth and gums. This technology is highly effective at removing even the toughest stains and buildup, leaving your teeth feeling clean and refreshed.

In addition to its sonic technology, the AeroGlide UltraSlim Smart Toothbrush also comes with a range of smart features that help you optimize your brushing routine. These include a built-in timer that ensures you brush for the recommended two minutes, as well as a pressure sensor that alerts you if you're brushing too hard.

Overall, the AeroGlide UltraSlim Smart Toothbrush by Boie is a highly advanced and effective toothbrush that is perfect for anyone looking to take their oral hygiene to the next level. With its advanced sonic technology and smart features, it provides a deep and thorough clean that leaves your teeth feeling fresh and healthy.
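Why does this happen? With no training data about this product, every token is drawn from a low-confidence region of the distribution, yet sampling still returns something. One crude (and far from sufficient) signal is the average log-probability of the generated tokens. The numbers below are invented to show the idea; reliable hallucination detection remains an open research problem:

```python
import math

def avg_logprob(token_probs):
    """Average log-probability of a generated sequence: a crude
    confidence signal. Very low values suggest the model is far
    from anything it saw during training."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

# Invented per-token probabilities for two hypothetical answers.
grounded   = [0.92, 0.88, 0.95, 0.90]  # topic well covered by data
fabricated = [0.31, 0.22, 0.40, 0.27]  # topic the data never saw

for name, probs in [("grounded", grounded), ("fabricated", fabricated)]:
    print(f"{name}: average log-prob = {avg_logprob(probs):.2f}")
# grounded answers score near 0; fabricated ones score far lower
```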

Limitation 3: Can only solve mathematical problems for now

To be clear, by "mathematical problems" I don't mean just arithmetic, but any problem that can be solved by computation. For example, we can build a self-driving car that brakes by itself when a pedestrian suddenly crosses the road. That is a mathematical problem: identify the pedestrian, then compute whether and when to brake.
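The braking half really is just a computation. Here is a simplified sketch that compares the car's stopping distance (a textbook kinematics formula) against the distance to a detected pedestrian. The reaction time, deceleration, and safety margin are made-up round numbers, and perfect detection is assumed:

```python
def should_brake(speed_mps: float, distance_m: float,
                 reaction_s: float = 1.0, decel_mps2: float = 7.0) -> bool:
    """Decide whether to brake for a detected pedestrian.

    Stopping distance = reaction distance + braking distance:
    v * t_react + v**2 / (2 * a). Detection is assumed done.
    """
    margin_m = 2.0  # safety buffer, chosen arbitrarily here
    stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)
    return stopping_m + margin_m >= distance_m

# ~50 km/h is about 13.9 m/s; a pedestrian appears 25 m ahead.
print(should_brake(13.9, 25.0))  # True: brake now
print(should_brake(13.9, 60.0))  # False: still plenty of room
```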

However, if the same driverless car meets another car blocking a one-way alley, it is stuck; all it can do is call for help and wait for the other driver to move. That is a problem computation alone cannot solve.

For now, most LLMs can only solve such computable problems. Non-computable problems still cannot be solved, especially those requiring physical experiments.

Of course, I say "for now" because future robot + LLM combinations may push the boundary of computable problems a little wider.