The increasing use of AI facial recognition technology in UK stores, aimed at identifying repeat shoplifters, has raised concerns about the spread of "airport-style security" and its impact on privacy rights, according to a human rights group. Facewatch, a UK surveillance company, has seen a surge in demand for its product amid rising shoplifting and violence in retail establishments, CNN reported on July 15. The system works by capturing video of customers as they enter a store and cross-referencing their faces against known offenders in its database. When a flagged individual attempts to re-enter, store managers receive an alert on their phones, enabling them to take appropriate action.
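To make that description concrete, the sketch below shows one common way such watchlist matching can work: comparing a face embedding from a camera frame against stored embeddings using a cosine-similarity threshold. This is an illustrative assumption, not Facewatch's actual implementation; the watchlist entries, embedding size, threshold value, and function names are all hypothetical.

```python
import numpy as np

# Hypothetical watchlist: case IDs mapped to stored face embeddings
# (fixed-length vectors produced by a face-recognition model).
WATCHLIST = {
    "case-0042": np.random.rand(128),
    "case-0107": np.random.rand(128),
}

MATCH_THRESHOLD = 0.92  # assumed cosine-similarity cutoff for raising an alert


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_visitor(embedding: np.ndarray):
    """Return the best-matching watchlist entry if it clears the threshold."""
    best_id, best_score = None, 0.0
    for case_id, stored in WATCHLIST.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_id, best_score = case_id, score
    # Only alert staff when the best match clears the confidence threshold;
    # below it, the visitor is treated as unknown.
    return best_id if best_score >= MATCH_THRESHOLD else None


if __name__ == "__main__":
    visitor = np.random.rand(128)  # stand-in for an embedding from a camera frame
    match = check_visitor(visitor)
    if match:
        print(f"Alert store manager: visitor matches watchlist entry {match}")
    else:
        print("No match; visitor not flagged")
```

In a setup like this, the threshold governs the trade-off between missed matches and the kind of false positives critics worry about, which is why its choice matters as much as the model itself.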
Critics argue that this technology normalises invasive security measures for everyday activities like shopping, infringing on individuals' right to privacy. Recording shoppers' biometric data has been likened to demanding fingerprints or DNA samples as a condition of entering a store. Concerns also extend to the potential for bias and errors in AI-powered software, which can lead to innocent individuals being wrongly flagged as offenders. While Facewatch claims a high accuracy rate, mistakes do occur, albeit infrequently, and the company says erroneous records are promptly deleted once it is notified. To comply with UK law, Facewatch notifies customers that the technology is in use and retains data for only two weeks, half the typical retention period of conventional CCTV footage.
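As a rough illustration of how such a retention policy might be enforced, the sketch below purges records older than a fixed window. The 14-day and 28-day figures follow the article's description; the record structure and function names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: retention windows taken from the article's figures
# (two weeks for facial-recognition records, double that for CCTV footage).
FACIAL_RETENTION = timedelta(days=14)
CCTV_RETENTION = timedelta(days=28)


def purge_expired(records: list, retention: timedelta) -> list:
    """Keep only records captured within the retention window."""
    cutoff = datetime.now(timezone.utc) - retention
    return [r for r in records if r["captured_at"] >= cutoff]


if __name__ == "__main__":
    records = [
        {"id": 1, "captured_at": datetime.now(timezone.utc) - timedelta(days=3)},
        {"id": 2, "captured_at": datetime.now(timezone.utc) - timedelta(days=20)},
    ]
    # Record 2 falls outside the 14-day facial-recognition window and is dropped.
    print(purge_expired(records, FACIAL_RETENTION))
```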
The founder of Facewatch emphasises that the system is intended to prevent crime and keep retail staff safe. Rising levels of violence, abuse, and theft in stores have driven demand for such security measures, and the economic impact of theft, particularly by repeat offenders, can be significant for businesses already coping with the rising cost of living. However, critics caution against rushing to AI-driven solutions for complex issues like shoplifting, stressing the potential biases and societal implications of widespread deployment.
Simon Gordon, the founder of Facewatch, initially created the system to assist law enforcement but turned to the private sector when faced with limited police response. The system's success has attracted interest from businesses worldwide, prompting plans for expansion into the United States. Governments, meanwhile, are working on rules to govern the use of AI technology, acknowledging the need for responsible implementation. The European Parliament recently agreed to ban real-time, AI-powered facial recognition in public spaces, reflecting growing attention to the technology's ethical risks.