Gurman: Apple Watch Series 7 Will Be Available in Limited Quantities At Launch

Technology

MacRumors 06 September, 2021 - 01:49am 43 views

When is Apple's September event?

Apple Event: September 15, 2020 (Apple Events, apple.com)

When will Iphone 13 be announced?

Before 2020, Apple often announced its new iPhones on the first or second Tuesday of September. If that pattern holds in 2021, we'd expect the iPhone 13 range to be revealed on September 7 or September 14, with a release around 10 days later. (TechRadar, 'iPhone 13 release date, leaks, price and news')

This week’s top stories: iPhone 13 rumors, Apple Watch Series 7 delays, and more

9to5Mac 06 September, 2021 - 10:50am

By Chris Ciaccia and Jonathan Chadwick For Mailonline

Apple has indefinitely delayed the roll-out of controversial child safety features following a furious backlash from its users.

The contentious plans, revealed by the tech giant on August 5, involve scanning iPhones for child abuse images and reporting 'flagged' owners to the police. 

It had planned to roll out the feature for iPhones, iPads and Macs with software updates later this year in the US. 

But Apple said on Friday it would take more time to collect feedback and improve the proposed features, after the criticism of the system on privacy and other grounds both inside and outside the company.

However, child protection agencies have expressed their disappointment at Apple's decision today, with one dismissing as a trope the idea that 'child safety is the trojan horse for privacy erosion'.

 Apple has indefinitely delayed its plans for features intended to help protect children from predators

The new features, which had been due to arrive with iOS 15, iPadOS 15, watchOS 8 and macOS Monterey later this year, would allow Apple to: 

1. Flag images to the authorities, after a manual check by Apple staff, if they match images already identified as child sexual abuse material by the US National Center for Missing and Exploited Children 

2. Scan images that are sent and received by minors in the Messages app. If nudity is detected, the photo will be automatically blurred and the child will be warned that the photo might contain private body parts 

3. Allow Siri to 'intervene' when users try to search topics related to child sexual abuse 

4. Notify parents if a child under the age of 13 sends or receives a suspicious image, provided the child's device is linked to Family Sharing.  

'Previously we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them and to help limit the spread of child sexual abuse material [CSAM],' the company said in a statement. 

'Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.' 

Under the plans, Apple would automatically scan iPhones and cloud storage for child abuse images and report 'flagged' owners to the police after a company employee had looked at their photos.

The new safety tools will also be used to look at photos sent by text message to protect children from 'sexting', automatically blurring images Apple's algorithms detect as sexually explicit. 

The iPhone maker said last month that the detection tools had been designed to protect user privacy and wouldn't allow the tech giant to see or scan a user's photo album. 

Instead, the system will look for matches, securely on the device, based on a database of 'hashes' – a type of digital fingerprint – of known CSAM images provided by child safety organisations.
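Conceptually, that on-device matching step is a set lookup: the phone computes a fingerprint for each photo and checks it against the shipped database of known-CSAM fingerprints. A minimal sketch of the structure, using a stand-in `fingerprint` function (Apple's actual NeuralHash algorithm is proprietary and is not reproduced here):

```python
# Sketch of on-device hash matching. The real system uses NeuralHash,
# a proprietary perceptual hash; SHA-256 is a stand-in to show the
# set-lookup structure only.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash such as NeuralHash.
    return hashlib.sha256(image_bytes).hexdigest()

# Database of fingerprints of known images, shipped to the device.
known_hashes = {fingerprint(b"known-abuse-image-bytes")}

def matches_database(image_bytes: bytes) -> bool:
    # The device checks for a match locally; the photo itself never
    # leaves the phone at this stage.
    return fingerprint(image_bytes) in known_hashes

print(matches_database(b"known-abuse-image-bytes"))  # True
print(matches_database(b"holiday-photo-bytes"))      # False
```

The point of the design is that only the match result, not the photo album, is visible to the matching step.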

As well as looking for photos on the phone, cloud storage and messages, Apple's personal assistant Siri will be taught to 'intervene' when users try to search topics related to child sexual abuse.     

The new tools were set to be introduced later this year as part of the iOS and iPadOS 15 software update due in the autumn.

They were initially set to be introduced in the US only, but with plans to expand further over time. 

Critics had argued the entire set of tools could be exploited by repressive governments looking to find other material for censorship or arrests.

If and when implemented, it would also be impossible for outside researchers to verify whether Apple was scanning only a small set of on-device content.        

Apple's plans sparked a global backlash from a wide range of rights groups, with employees also criticising the plan internally.

Greg Nojeim of the Center for Democracy and Technology in Washington DC said: 'Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship.' 

Using 'hashes' or digital fingerprints, images in a CSAM database will be compared to pictures on a user's iPhone. Any match would then be sent to Apple and, after being reviewed by a human, on to the National Center for Missing and Exploited Children.

The new image-monitoring feature is part of a series of tools heading to Apple mobile devices. 

1. A user's photos are compared with 'fingerprints' from America's National Center for Missing and Exploited Children (NCMEC), drawn from its database of known child abuse videos and images, which allow the technology to detect them, stop them and report them to the authorities. 

Those images are translated into 'hashes', a type of code that can be 'matched' to an image on an Apple device to see if it could be illegal.

2. Before an iPhone or other Apple device uploads an image to iCloud, the 'device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image's NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.'  

3. Apple's 'system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content,' Apple has said.  

At the same time Apple's texting app, Messages, will use machine learning to recognise and warn children and their parents when receiving or sending sexually explicit photos, Apple said.

'When receiving this type of content, the photo will be blurred and the child will be warned,' Apple said.

'As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it.'

Similar precautions are triggered if a child tries to send a sexually explicit photo, according to Apple. Personal assistant Siri, meanwhile, will be taught to 'intervene' when users try to search topics related to child sexual abuse, according to Apple.

4. Apple says that if their 'voucher' threshold is crossed and the image is deemed suspicious, its staff 'manually reviews all reports made to NCMEC to ensure reporting accuracy'

Users can 'file an appeal to have their account reinstated' if they believe it has been wrongly flagged. 

5. If the image is a child sexual abuse image, NCMEC can report it to the authorities with a view to a prosecution.
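Taken together, the five steps amount to a threshold-gated reporting pipeline: vouchers accumulate silently, and human review is triggered only once enough matches build up. A simplified sketch of that logic, in which the class, method names and threshold value are all hypothetical (Apple has not published the exact threshold):

```python
# Simplified sketch of the voucher/threshold flow described above.
# The threshold value and all names here are hypothetical.
THRESHOLD = 30  # hypothetical; the real value is unpublished

class Account:
    """Tracks the safety vouchers an account uploads to iCloud Photos."""

    def __init__(self):
        self.vouchers = []

    def upload_photo(self, photo_hash: str, known_hashes: set):
        # Step 2: the device attaches a voucher encoding the match result.
        # In the real system the voucher is encrypted and unreadable to
        # Apple while the account is below the threshold.
        self.vouchers.append(photo_hash in known_hashes)

    def review_needed(self) -> bool:
        # Steps 3-4: only once the match count crosses the threshold can
        # the vouchers be opened for human review, and then step 5: NCMEC.
        return sum(self.vouchers) >= THRESHOLD

account = Account()
database = {"known-hash"}
for _ in range(THRESHOLD):
    account.upload_photo("known-hash", database)
print(account.review_needed())  # True once the threshold is reached
```

Below the threshold, nothing about the account is interpretable by Apple; crossing it is what unlocks steps 4 and 5.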

Security researcher Alec Muffett raised concerns the system will be deployed differently in authoritarian states, asking 'what will China want [Apple] to block?' 

Matthew Green, a top cryptography researcher at Johns Hopkins University, also warned that the system could be used to frame innocent people by sending them seemingly innocuous images designed to trigger matches for child pornography. 

That could fool Apple's algorithm and alert law enforcement. 

'Researchers have been able to do this pretty easily,' Green said of the ability to trick such systems.

Other abuses could include government surveillance of dissidents or protesters. 'What happens when the Chinese government says, "Here is a list of files that we want you to scan for",' Green asked. 

'Does Apple say no? I hope they say no, but their technology won't say no.'  

'This will break the dam — governments will demand it from everyone,' Green said. 

'The pressure is going to come from the UK, from the US, from India, from China. I'm terrified about what that's going to look like,' he told WIRED.

Ross Anderson, professor of security engineering at Cambridge University, branded the plan 'absolutely appalling'. 

'It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of our phones and laptops', he said. 

However, other experts welcomed Apple's plans. Dr Rachel O’Connell, founder and CEO of verification consultancy Trust Elevate, called Apple’s child protections proposal 'a scalable solution that does not break encryption'.

'[It] respects user privacy while at the same time significantly bearing down on certain types of criminal behaviour, in this case terrible crimes which harm children,' she said. 

'The idea that child safety is the trojan horse for privacy erosion is a trope that privacy advocates expound. 

'This creates a false dichotomy and shifts the focus away from the children and young people at the front line of dealing with adults with a sexual interest in children, who often engage in grooming children and soliciting them to produce child sexual abuse material.'   

Meanwhile, Andy Burrows, the head of child safety online policy at NSPCC, called Apple's decision 'an incredibly disappointing delay'. 

'Apple were on track to roll out really significant technological solutions that would undeniably make a big difference in keeping children safe from abuse online and could have set an industry standard,' he said. 

'They sought to adopt a proportionate approach that scanned for child abuse images in a privacy preserving way, and that balanced user safety and privacy. 


'We hope Apple will consider standing their ground instead of delaying important child protection measures in the face of criticism.' 

Apple had been playing defence on the plan for weeks, and had already offered a series of explanations and documents to show that the risks of false detections were low. 

Apple boasted that 'the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year'. 
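That one-in-one-trillion figure rests on the threshold, not on per-image accuracy: an account is flagged only after many independent false matches. An illustrative calculation with hypothetical numbers (neither the per-image error rate nor the threshold below is published by Apple):

```python
# Illustrative only: shows how a match threshold suppresses the
# account-level false-positive rate. All three numbers are hypothetical.
from math import lgamma, log, log10

p = 1e-6     # hypothetical per-image false-match probability
n = 10_000   # photos an account uploads in a year
t = 30       # hypothetical threshold of matches before flagging

# log10 of the binomial probability of exactly t false matches out of n
# photos -- the dominant term of the tail P(X >= t) when n*p is tiny.
log10_comb = (lgamma(n + 1) - lgamma(t + 1) - lgamma(n - t + 1)) / log(10)
log10_p_flag = log10_comb + t * log10(p) + (n - t) * log10(1 - p)

print(log10_p_flag)  # roughly -92: odds of about one in 10**92
```

Even with a generous per-image error rate, requiring dozens of independent matches drives the chance of wrongly flagging a whole account to a vanishingly small number.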

Craig Federighi, Apple's senior vice president of software engineering, told The Wall Street Journal in August that the AI-driven program will be protected against misuse through 'multiple levels of auditability'. 

'We, who consider ourselves absolutely leading on privacy, see what we are doing here as an advancement of the state of the art in privacy, as enabling a more private world,' Federighi said. 

The system will look for matches, securely on the device, based on a database of 'hashes' - a type of digital fingerprint - of known CSAM images provided by child safety organizations. 

These fingerprints do not require an identical match, because paedophiles would only have to crop an image differently, rotate it or change its colours to avoid detection.

As a result, the technology used to stop child abuse images has to be less rigid, making it more likely to flag perfectly innocent files. 
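The reason an ordinary cryptographic hash cannot be used here is that changing even one byte of a file produces a completely different digest, so any crop, rotation or recolour would defeat an exact-match database. A quick demonstration using SHA-256 (for illustration only; Apple's NeuralHash is a perceptual hash designed to survive such edits):

```python
# Why exact hashing fails for image matching: flipping a single byte of
# the input changes a cryptographic digest entirely, so any small edit
# to an image would evade an exact-match database.
import hashlib

original = b"example image bytes"
edited = b"Example image bytes"  # a single byte changed ('e' -> 'E')

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(edited).hexdigest()

print(h1 != h2)  # True: the two digests differ completely
```

Perceptual hashes instead map visually similar images to nearby fingerprints, which is what makes edited copies detectable, at the cost of occasionally matching innocent look-alikes.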

In the worst cases, the police could be called in, disrupting the life or job of a person falsely accused, perhaps simply for sending a picture of their own child.

The program currently only looks at photos and videos, but there are concerns that the tech could be used to scan usually encrypted messages. 

There are major concerns that the digital loophole could be exploited by an authoritarian government to police other offences and infringe human rights.

For example, in countries where homosexuality is illegal, private photos could be used against an individual in court, experts warn.

This new Apple policy will be looking at photos and videos, but there are concerns that the technology could be used to allow companies to see usually encrypted messages such as iMessage or WhatsApp.  

There are concerns that somebody could send a perfectly innocent photograph to another person, knowing it will get them into trouble.

If a person, or a government, has knowledge of the algorithm or 'fingerprint' being used, they could use it to fit someone up for a crime.

If the policy is rolled out worldwide, privacy campaigners fear that the tech giants will soon be allowed unfettered access to files via this backdoor.  
