In artificial intelligence (AI), a hallucination, also referred to as a confabulation or delusion, occurs when an AI system presents false or misleading information as if it were fact.
For example, if asked to create a financial report for a firm, a hallucinating chatbot might state that the company’s revenue was $20 billion, with no basis in reality. Such outputs are called “hallucinations” by loose analogy with human psychology, but there is a key difference: AI hallucinations are erroneously constructed responses or beliefs, whereas human hallucinations typically involve false perceptions. For this reason, some experts argue that the phrase “AI hallucination” unreasonably anthropomorphises computers.
The term “AI hallucination” gained popularity during the AI boom, alongside the rise of chatbots such as ChatGPT that are built on large language models (LLMs). Users complained about these chatbots’ tendency to weave random falsehoods into their output, and some estimates suggest that chatbots hallucinate as often as 27% of the time.
In natural language processing, a hallucination is commonly defined as “generated content that is nonsensical or unfaithful to the provided source content.” Hallucinations can be categorised in several ways. They are classified as intrinsic or extrinsic depending on whether the output contradicts the source or simply cannot be verified from it. They can also be classified as closed-domain or open-domain, depending on whether the output contradicts the prompt and its provided context, or is false with respect to general world knowledge without reference to any particular source.
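This taxonomy can be made concrete with a small, purely illustrative Python sketch; the enum names, boolean flags, and decision rule below are our own labels for the definitions above, not an established API.

```python
from enum import Enum

class HallucinationType(Enum):
    INTRINSIC = "output contradicts the provided source"
    EXTRINSIC = "output cannot be verified from the provided source"

def classify(contradicts_source: bool, verifiable_from_source: bool):
    """Toy decision rule restating the intrinsic/extrinsic split."""
    if contradicts_source:
        return HallucinationType.INTRINSIC
    if not verifiable_from_source:
        return HallucinationType.EXTRINSIC
    return None  # output is faithful to the source

# A summary that flips a figure given in the source is intrinsic;
# one that adds a detail the source never mentions is extrinsic.
print(classify(contradicts_source=True, verifiable_from_source=False))   # INTRINSIC
print(classify(contradicts_source=False, verifiable_from_source=False))  # EXTRINSIC
```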
Data-driven hallucinations arise from source-reference divergence. When a model is trained on data in which the reference (target) text diverges from the source, it is effectively encouraged to generate text that is not grounded in, or faithful to, the source.
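As a minimal illustration of how such divergence might be flagged in a training corpus, the sketch below uses a simple token-overlap heuristic; the threshold and example pairs are assumptions for demonstration, not a method described in the literature above.

```python
def token_overlap(source: str, reference: str) -> float:
    """Fraction of reference tokens that also appear in the source."""
    src = set(source.lower().split())
    ref = reference.lower().split()
    if not ref:
        return 1.0
    return sum(tok in src for tok in ref) / len(ref)

# Training pairs whose reference introduces many tokens absent from the
# source are candidates for source-reference divergence.
pairs = [
    ("The firm reported revenue of $2bn in 2023.",
     "Revenue reached $2bn in 2023."),
    ("The firm reported revenue of $2bn in 2023.",
     "Revenue reached $20bn and the CEO resigned."),
]
for source, reference in pairs:
    score = token_overlap(source, reference)
    flag = "possibly divergent" if score < 0.6 else "ok"
    print(f"{score:.2f}  {flag}  {reference}")
```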
It has been shown that hallucination is statistically inevitable for any imperfect generative model trained to maximise training likelihood, which is why techniques such as active learning (for example, reinforcement learning from human feedback) are needed to mitigate it. Other research attributes hallucinations to a tension between novelty and usefulness: an emphasis on novelty in machine creativity can yield original but incorrect answers, that is, falsehoods, while an emphasis on usefulness can yield rote-memorised responses that lack originality.
Hallucinations can also result from errors in how text and representations are encoded and decoded. When an encoder learns spurious correlations between different elements of the training data, the generated output can depart from the input. The decoder then produces the final target sequence from the encoder’s representation, and two aspects of this decoding process contribute to hallucination. First, the decoder may attend to the wrong portion of the encoded input, leading to incorrect generation. Second, the design of the decoding strategy itself matters: strategies that increase generation diversity, such as top-k sampling, are positively correlated with increased hallucination.
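The effect of the decoding strategy can be seen with a toy next-token distribution: greedy decoding always returns the most probable token, while top-k sampling can select lower-probability tokens, trading diversity for a higher chance of unsupported output. The distribution and token strings below are invented purely for illustration.

```python
import random

def greedy(probs):
    # Deterministic decoding: always return the single most likely token.
    return max(probs, key=probs.get)

def top_k_sample(probs, k=3):
    # Keep only the k most likely tokens, then sample among them.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, weights = zip(*top)
    return random.choices(tokens, weights=weights)[0]

# Toy next-token distribution: the faithful continuation dominates, but
# lower-probability (potentially unsupported) tokens stay reachable once
# the decoder is allowed to sample beyond the argmax.
next_token_probs = {"2023": 0.55, "2022": 0.20, "2031": 0.15, "never": 0.10}

print("greedy:", greedy(next_token_probs))
print("top-k :", [top_k_sample(next_token_probs) for _ in range(5)])
```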
In AI-driven video analytics, advances are continuously reshaping the landscape of surveillance, security, and data analysis. As AI systems become more sophisticated, however, there is a growing need to address potential challenges, including how hallucinations can manifest in video analytics.
One way hallucinations can surface in AI video analytics is in the interpretation of ambiguous or incomplete visual data. AI algorithms rely on pattern recognition and machine learning to identify objects, behaviours, and anomalies within video feeds, but when the visual input is unclear or distorted, they may generate erroneous interpretations, akin to hallucinations in humans.
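One common mitigation, sketched below under assumed names and thresholds (the Detection class, the 0.6 cut-off, and the labels are illustrative, not any specific product’s API), is to avoid treating low-confidence interpretations of unclear footage as fact and to route them to human review instead.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # 0.0-1.0 score from a hypothetical detector

def filter_ambiguous(detections, threshold=0.6):
    """Suppress low-confidence detections rather than reporting them as fact.

    A simple guard against hallucination-like outputs on unclear or
    distorted frames: anything below the threshold is flagged for review
    instead of being raised as an alert.
    """
    confident, uncertain = [], []
    for d in detections:
        (confident if d.confidence >= threshold else uncertain).append(d)
    return confident, uncertain

frame_detections = [Detection("person", 0.92), Detection("weapon", 0.41)]
alerts, review_queue = filter_ambiguous(frame_detections)
print("alert on:", [d.label for d in alerts])
print("send to human review:", [d.label for d in review_queue])
```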
Furthermore, the limitations of AI in understanding contextual information and cues could contribute to hallucination-like errors in video analytics. Without a comprehensive understanding of the broader context surrounding a scene, AI algorithms may misinterpret visual stimuli, leading to erroneous conclusions or false alarms.
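A hypothetical example of such context-awareness is to gate raw detections with simple scene context such as zone and time of day; the zones, hours, and labels below are invented for illustration and do not describe any particular deployment.

```python
from datetime import time

# Context rule: a "person" detected in a loading bay is routine during
# working hours but anomalous overnight. Gating the raw detection with
# scene context avoids raising alarms on plausible, expected activity.
WORKING_HOURS = (time(7, 0), time(19, 0))

def is_anomalous(label: str, zone: str, timestamp: time) -> bool:
    in_hours = WORKING_HOURS[0] <= timestamp <= WORKING_HOURS[1]
    if label == "person" and zone == "loading_bay":
        return not in_hours  # only unusual outside working hours
    return label in {"weapon", "fire"}  # always escalate these labels

print(is_anomalous("person", "loading_bay", time(14, 30)))  # False: routine
print(is_anomalous("person", "loading_bay", time(2, 15)))   # True: off-hours
```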
IntelexVision addresses the challenge of hallucinations in AI video analytics through a comprehensive approach. Firstly, rigorous testing and validation procedures are implemented to assess the robustness of our AI models against ambiguous or distorted visual input. This involves subjecting the algorithms to diverse scenarios, including those that simulate hallucinatory effects, to identify and rectify vulnerabilities proactively. Additionally, our AI systems are equipped with context-awareness mechanisms to interpret visual data within the broader context of a scene, minimising the risk of misinterpretations. Continuous monitoring and feedback loops are integral to our process, enabling us to detect and correct deviations from expected behaviour promptly. By prioritising transparency and accountability in our development practices, IntelexVision ensures that our AI-driven video analytics solutions deliver reliable and trustworthy results, even in complex and dynamic environments.
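As a rough illustration of the kind of robustness check such testing might involve (not IntelexVision’s actual pipeline; the detector stub, noise model, and seed below are assumptions), one can perturb clean frames and measure how consistently a detector’s output survives the degradation.

```python
import numpy as np

def detect(frame):
    # Stand-in for a real detector: labels a frame by mean brightness.
    return "bright" if frame.mean() > 128 else "dark"

def perturb(frame, rng, noise_std=25.0):
    # Simulate degraded input (sensor noise, compression, low light).
    noisy = frame.astype(np.float64) + rng.normal(0.0, noise_std, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def consistency_rate(frames, trials=10):
    # Fraction of perturbed frames whose label matches the clean frame's.
    rng = np.random.default_rng(42)
    hits = 0
    for frame in frames:
        baseline = detect(frame)
        hits += sum(detect(perturb(frame, rng)) == baseline for _ in range(trials))
    return hits / (len(frames) * trials)

clean_frames = [np.full((32, 32), 200, dtype=np.uint8),
                np.full((32, 32), 40, dtype=np.uint8)]
print(f"label consistency under noise: {consistency_rate(clean_frames):.0%}")
```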