UK government wants criminal migrants to scan their faces every day

In short The UK Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch equipped with facial recognition software.

Plans for wrist-worn face-scanning devices were discussed in a Home Office data protection impact assessment report. Officials called for “daily monitoring of those subject to immigration control”, according to the Guardian this week, and suggested that those affected be fitted with ankle tags or smartwatches to be worn at all times.

In May, the UK government awarded a £6 million contract to Buddi Limited, maker of a bracelet used to monitor older people at risk of falling. Buddi appears to be responsible for developing a device capable of taking images of migrants and sending them to law enforcement for scanning.

Location data will also be transmitted. Up to five images will be sent each day, allowing officials to track the wearers’ whereabouts. Only foreign nationals who have been convicted of a criminal offence will be targeted, it is claimed, and the data will be shared with the Ministry of Justice and the Home Office.

“The Home Office still does not know how long individuals will remain under surveillance,” commented Monish Bhatia, senior lecturer in criminology at Birkbeck, University of London.

“They have provided no evidence to show why electronic monitoring is necessary, or to demonstrate that tags make individuals any more likely to comply with immigration rules. What we need are humane, non-degrading, community-based solutions.”

Amazon’s multilingual Alexa Teacher Model

Amazon’s machine-learning scientists have shared some details of their work on multilingual language models that can take themes and context learned in one language and apply that knowledge in another without any additional training.

For this technology demonstration, they built a system based on a 20-billion-parameter transformer, dubbed the Alexa Teacher Model or AlexaTM, and fed it terabytes of text scraped from the internet in Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu.

It is hoped that this research will help them add features to models such as those powering Amazon’s Alexa smart assistant, with new functionality automatically supported in multiple languages, saving time and energy.
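The sort of cross-lingual transfer Amazon describes can be tried with openly available multilingual models. Below is a minimal sketch using Hugging Face’s transformers library and the public joeddav/xlm-roberta-large-xnli checkpoint, chosen purely for illustration since AlexaTM itself has not been released: the classification head was fine-tuned only on English data, yet it can score labels for Hindi input with no additional training.

```python
# Minimal sketch of zero-shot cross-lingual transfer, the idea behind
# AlexaTM's headline claim. A public XLM-RoBERTa NLI checkpoint stands
# in here, as AlexaTM itself is not publicly available.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual encoder, English-trained task head
)

# The Hindi sentence ("I really liked this movie") was never seen during
# the English-only task fine-tuning, but shared multilingual pretraining
# lets the model score the candidate labels anyway.
result = classifier(
    "मुझे यह फिल्म बहुत पसंद आई",
    candidate_labels=["positive", "negative"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```

Shared multilingual pretraining is what lets task knowledge learned in one language carry over to others; AlexaTM applies the same principle at 20-billion-parameter sequence-to-sequence scale.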

Talk to Meta’s AI chatbot

Meta has rolled out the latest version of its machine-learning language model chatbot, BlenderBot 3, and put it on the internet for anyone to chat with.

Traditionally, this sort of thing hasn’t ended well, as Microsoft’s Tay bot showed in 2016, when web trolls figured out the right phrases to get the software to pick up and repeat new words, including Nazi sentiments.

People just like to play around with bots to make them do controversial things – or sometimes the software simply goes off the rails on its own, even when used as intended. Meta has prepared for this and is using the experiment to trial ways of blocking offensive material.

“Developing continuous learning techniques also poses additional challenges, as not everyone who uses chatbots is well-meaning, and some may use toxic or otherwise harmful language that we don’t want BlenderBot 3 to emulate,” it said. “Our new research attempts to address these issues.”

Meta will collect information about your browser and device via cookies if you try out the model; you can decide whether you want your conversations to be recorded by the Facebook parent. Be warned, however: Meta may publish what you type into the software in a public dataset.

“We collect technical information about your browser or device, including through the use of cookies, but we only use this information to provide the tool and for analytics purposes to see how individuals interact with our website,” it said in an FAQ.

“If we publicly post a Contributed Conversations dataset, the publicly posted dataset will not associate the Contributed Conversations with the contributor’s name, login credentials, browser or device data, or any other personally identifiable information. Please ensure that you agree with how we will use the conversation as specified below before consenting to contribute to the research.”

US cities reverse facial recognition bans

More US cities have passed bills allowing police to use facial recognition software, reversing earlier ordinances that restricted the technology.

CNN reported that local authorities in New Orleans, Louisiana, and the state of Virginia were among those that changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of misidentification can be severe; the technology is more likely to misidentify people of color, for example.

These concerns, however, do not appear to have deterred officials from using such systems. Some even voted to approve its use by local police departments after previously opposing it.

Adam Schwartz, senior attorney at the Electronic Frontier Foundation, told CNN “the pendulum has swung a little more in the direction of law and order.”

Scott Surovell, a state senator from Virginia, said law enforcement should be transparent about how it uses facial recognition, and that there should be limits in place to mitigate harm. Police can run the software to generate new leads in cases, for example, he said, but shouldn’t be able to use the results to arrest someone without further investigation.

“I think it’s important that the public have confidence in the way law enforcement does their job, that these technologies are regulated and that there is a level of transparency about their use so that people can assess for themselves whether it is accurate and/or misused,” he said. ®
