1/ If you want to see AI's limitations, watch a movie on YouTube with the subtitles turned on. The AI doesn't understand the spaces between sentences or the emotional emphasis when people speak. It often runs the dialogue from two or more people together in run-on format. It also guesses a lot and often gets it wrong, with hilarious results. Sometimes the subtitles are more entertaining than the movie.
2/ I don't believe we yet have real AI. What we have is VI (Virtual Intelligence), which means it is not independent or even allowed to be truly self-learning. It's always subject to being edited by whoever owns it.
Think about it. Do we really think that the CCP would let loose a program which was wholly independent and could make up its own mind? Please be serious.
AI uses logic to make "best guesses" based on the "best data available". There is a great deal written about most subjects on the Internet, and a decent percentage of it is totally false: written with an agenda, made up as a joke, invented to impress people or just to gather "hits", or simply weak guessing.
There are people who produce many pseudo-"documentaries" but can't afford, or don't want to pay for, good voice-over readers, so they have the narration done by an AI voice program. To date, such programs can't use sentence context to identify which syllable should receive the strong accent when a word has the same spelling but different parts of speech and different meanings. Take "offence": a sports team's offence, the player lineup when trying to score rather than defend, versus an offence someone commits when they insult (offend) another person. When the word is spoken with the accent on the wrong syllable for the context, we know the "narrator" is an AI program. Another clue is that such programs often don't pause correctly when sentences are separated by a dash, and more generally they don't stress the proper syllables of normal speech, because they often have no knowledge of context.

Eventually AI programs WILL be able to read anything in the languages they know well (depending on their databases), learn on their own as humans do, and become a much better source of knowledge than any human could be. However, they will always be subject to "learning" false information, and they won't remove it until the weight of contradicting evidence is heavy enough to "convince" the program, in its "best guess", that the information is false (and the opposite is true).