Craig Martell, Chief Digital and AI Officer at the Pentagon, said that generative AI systems such as ChatGPT scare him to death.
"Yes, I'm scared to death. That's my opinion," he said.
According to him, such systems do not understand context, yet they answer questions "authoritatively."
"That's why you trust the system even when it's wrong. And that means it is an ideal tool for disinformation," Martell added.
At the same time, the Pentagon lacks the tools to detect such disinformation and warn about it.
Some countries have already begun to consider ways to limit the risks of using AI.
Here is how ChatGPT itself reacted to this:
It is important to note that ChatGPT and other generative AI systems cannot spread disinformation on their own, since they are not independent agents but tools used by people. Ultimately, individuals are responsible for how the information produced by such systems is used and interpreted.
Rather than fearing and restricting the use of artificial intelligence, it is necessary to develop regulation and ethical principles for such technologies, and to train people to recognize and critically analyze information received from AI systems.