So we thought we were ready, and in a blink (Bing?) of an eye we weren't, because there was no precedent and, of course, society as a whole hadn't had the time to digest the occurrence. We are not talking about an AI singularity event, but about AI's public emergence into a society that had never encountered it, as most things happen in this corner of the universe. Even if image, video, audio and, to some degree, text generative AI have been paving the way for a while now, it is quite another thing to interact live with a large language model chatbot.

We believe the level reached this year is unprecedented, but that doesn't mean the general public should bow in awe; instead, perhaps approach AI with a grain of caution. It will transform society, but hardly in the ways we expect, and maybe not for the best, given the confusion over what it can actually do for us. For now it is not that good at anything except entertainment, trivial productivity tasks and weirdly cutting corners, even when it works properly. Not to mention promoting its owners.

A technological feat? Sure. But a truly useful one? For now it still doesn't look like it, and this is easy to put to the test: push a chatbot long enough and you will obtain the famous hallucinations, because sooner rather than later it will produce one. At this point we should be glad the technology is not evolving or transforming by itself, and that we are dealing at most with a narrow AI, not a general one.

Smiles aside, all of this, combined with the lack of analysis of how AI can impact the human psyche, our daily activities and even the employment landscape, the global economy and geopolitical stability, warrants a pause. This study offers just a glimpse of how AI will disrupt the higher-income labor market, but we need more. The now famous open letter from the Future of Life Institute, and the weight of the people behind it, could be a sign that we need to prepare better for AI, setting aside unwarranted suspicions of any ulterior motives.

More worrisome, the rush for monetization will force the major AI players into truly misusing the technology: profiling goes hand in hand with advertising, and advertising is regarded as a must for an expensive but "free" technology. Sometimes this goes way too far, and consumer rights are completely forgotten.

The Italian data protection agency ordered a swift block of ChatGPT, following a leak of payment and user information, until its investigation into the data gathered by OpenAI is concluded. The translation of its decision states that "As confirmed by the tests carried out so far, the information made available by ChatGPT does not always match factual circumstances, so that inaccurate personal data are processed". Protecting children is another point raised: there seems to be no adequate filtering of answers that might be inappropriate for their age or, frankly, even dangerous.

Germany, Ireland and France are also following suit in taking a stance against the dangers of AI, and we will keep an eye on the news.

The entire text here raises a few serious questions:

Is the Turing test still enough?

Yes, it is, but it should remain just that: a test. The tactic of using AI and presenting the results as produced by a human, when sometimes they are not even vetted or fact-checked by one, is misleading at best. And in communications we believe there is no place for AI that impersonates a human, whether declared as such or not. We are pointing here, for example, at a public relations bot deployed by a bank, or at the call support service of your local store.

Is it necessary for governments to get involved in regulating AI?

It might be the best course of action, as long as it is done transparently and not abused in any way. Western countries should enforce public disclosure of AI use on the Internet and in business: we already require it for browser cookies, and AI deserves at least the same level of scrutiny.

Should governments be allowed to use AI themselves?

Yes, but not when human lives and welfare are at stake. Any results should be verified by a human, and the use of AI declared publicly. There should also be legal mechanisms in place to easily contest any AI involvement. Political influence, electoral interference (in fact, any kind of social manipulation via AI) and economic or military warfare should be completely banned, by international treaties as serious as the nuclear ones.

How will the birth of AI affect procurement?

Another point where we should pause and reflect, because it is not far-fetched to assume that AI will be used to triage proposals, in the same manner that big companies already use it to triage considerable volumes of CVs.

Is there a future for Open Source in AI development?

As usual, on this website the answer is "Yes". From any angle imaginable, it is unacceptable to proceed differently. Complete scrutiny of what the code contains and how it works can at least alleviate some of the shortcomings of today's AI. The Mozilla Foundation is already trying to spearhead the development of trustworthy AI, and we can only applaud such initiatives.