Apple Plans To Scan All Your Images and Report People to the Police?


Fstoppers 10 August, 2021 - 02:00pm

Does Apple Scan Your Photos?

Apple's technology scans photos in your iCloud Photo Library and compares them to a database of known CSAM image hashes. If it finds a certain number of matches (Apple has not specified what that number is), a human will review the flagged images and then report them to NCMEC, which will take it from there.

Vox: Why Apple's iOS 15 will scan iPhone photos and messages

The Messages features will not be activated by default on all devices; parents will need to opt in for children's devices that are set up as part of a family group. This is what Apple has to say about the protection for children coming to the Messages app as part of iOS 15:

The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos. When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.

There will also be Siri warnings in place if a user tries to search for images of Child Sexual Abuse Material (CSAM). This is how Apple says these features will work:

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report. 

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

I think these features sound like an excellent way to help protect children online. 

Finally, the most contentious feature Apple is rolling out involves the on-device scanning of all images before they are backed up to your iCloud account. The images remain encrypted, so Apple still can’t see them. They will simply be flagged if markers derived from a user's image match the same markers in the database at the National Center for Missing and Exploited Children. Here’s what Apple has to say about this feature: 

New technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC).

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
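Conceptually, the system compares fingerprints rather than photos. A minimal sketch of that idea, with a plain cryptographic hash standing in for Apple's perceptual NeuralHash and an invented database (all names and values here are illustrative, not Apple's actual implementation):

```python
import hashlib

# Hypothetical fingerprint database. The real system uses a blinded,
# unreadable set of perceptual hashes supplied by NCMEC; a cryptographic
# hash is used here purely for illustration.
KNOWN_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed-size digest; the photo itself is never shared."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_database(image_bytes: bytes) -> bool:
    """Only the fingerprint is compared against the database of known hashes."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The point the sketch makes is the one Apple stresses: the comparison operates on derived digests, so an image that is not already in the database produces no information about the user's library.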

It would be hard for anyone to fault Apple for making changes to protect children online and report images of CSAM. I completely agree with iCave Dave on the handling of these types of images and content of that nature. It seems as though Apple is handling the protection of children in a considered and appropriate way. 

Personally, I’m inclined to agree with some critics of the image-scanning technology and the precedent it sets. While we would all agree that the production and sharing of CSAM is simply wrong, the issue with scanning images is deciding when reporting users is appropriate: where should the line be drawn? Should images of drug use be flagged? Some would say they absolutely should. What about terrorism, which would presumably be defined by the government of each territory? In the West, we’re probably okay, but other parts of the world might have different definitions of “terrorist.” Who would decide what should be reported, and to whom?

I think we all agree that the types of images being discussed in this video and specifically mentioned by Apple are bad, that perpetrators should be flagged and reported, and that the world would be a better place if these types of images were not being produced or shared. I have yet to see anyone arguing in defense of CSAM images. However, I do believe there is a discussion to be had around any further use of this technology. What about countries where homosexuality is illegal? Is it a possible future outcome that images of consenting adults doing something the government doesn’t approve of get flagged and reported? This might seem like an unlikely possibility, but with the precedent this technology sets, it is a possible eventuality.

Would governments with questionable ethics in the future be able to leverage Apple into flagging images they dictate in order to keep selling iPhones in that country? I believe, with how focused Apple currently is on customers and their privacy, it’s unlikely to be an issue anytime soon.

Google and Facebook have been scanning uploaded images for this type of content for a number of years. Apple is now going to do it on the device. Does this detract from Apple's previous statement that "privacy is a human right"?

A cynic might say that this technology is being introduced in the interest of protecting children because that’s a very difficult subject for anyone to disagree with.

What are your thoughts on Apple scanning users' images? Are critics of the technology overreacting? Should a service provider be able to check anything stored on their servers? How would you feel if Adobe started scanning images on Creative Cloud or your Lightroom library for specific image types?

Let me know in the comments, but please remember to be polite, even if you disagree with someone’s point of view.

Check out the Fstoppers Store for in-depth tutorials from some of the best instructors in the business.

Seems to me there is ample room for abuse, but something needs to be done about producers of child pornography. I am concerned about those who genuinely shoot art.

I agree but at the same time things need to be done that don't infringe on the rights of the general public.

My concern is the precedent this technology sets. If we concede that it’s acceptable for a digital service provider to check and report images on our devices, should we be concerned about who says what we are and aren’t allowed images of?

I think we all can agree that child pornography is wrong and that type of content shouldn't be on anyone's phones.

At the same time, as you said, who's to say what other content is allowed on our private property (i.e., phones, tablets, computers)? It would be very easy to adapt this type of technology to restrict people's freedoms because some company doesn't like certain content. We already see that happening to some degree in countries where there isn't religious freedom, and even in the US, where tech companies censor opposing political views.

It does raise some very interesting questions. Since the introduction of Facial Recognition and Facial Reconstruction, computers have been used to 'age' missing children to what they would appear to be months or years later.

This same technology can be used to make someone younger, and a clever practitioner of post-production can make Eleanor Roosevelt into Paris Hilton.

The obvious answer for those who will do this anyway is not to use the cloud or build their own network. After all, the 'Cloud' is just another computer that you don't own!

I have to transfer images, and Google Drive is the easiest to use. It's cloud-based. Of course I don't take lewd pictures of anyone, either.

Couldn't care less; if it helps put an end to child abuse, then carry on.

That’s an interesting statement. Hypothetically speaking, would you agree that it’s acceptable to infringe the rights of others if it is in the interest of child protection?

There is nothing in my phone that would attract the attention of anyone. If you have nothing to hide, what exactly is being infringed? It’s the same situation as people who moan about CCTV being installed everywhere: who cares? I’m not doing anything wrong, so there's nothing to worry about.

In this case, aren’t they merely trying to match the binary fingerprints of existing images stored in a database? And only when there is a match (much like a fingerprint) is someone tasked with reviewing it before sending it to the authorities? I literally don’t see a problem with it.

That’s up to Apple, where they choose to distribute the tech, isn’t it? I’m fairly certain working with the Saudi Arabian authorities won’t be top of the priority list, for example.

China and Russia might be more interested in what’s on their citizens’ devices.

Again, it's up to Apple to choose whether to roll the tech out to those markets. And bear in mind, it’s still Apple who controls this process, and still Apple who chooses to send the info to the authorities. Are you suggesting they are going to start selling this to government agencies to spy on citizens? I doubt it.

I’m suggesting that it might be possible for governments to insist Apple give access to them in return for the ability to trade in their country. The existence of this technology on-device is of concern and is a strange direction from a company who previously stated that “privacy is a human right”.

I’m interested to see how this rolls out and how it’s implemented over the coming years.

Privacy is a human right that, in my opinion, you lose the minute you commit a serious crime, of which child abuse is the absolute worst.

I also think Apple will happily pull their products out of a country if that level of bribery becomes apparent. There are rumours they are pulling out of the U.K. market for far less.

So, if you lose privacy when you commit a serious crime, why bother to have a trial? Oh yeah - it's called the rule of law. Damned inconvenient to have rights and constitutional protections. Minority Report, here we come.

It means they scan your photos first and then decide whether it was the right thing to do, rather than deciding whether breaching your privacy is right before scanning. It also puts that decision in the hands of a private company, not the government. It sets a precedent that would be impossible to reverse. The fact that it is framed as 'child protection' makes it a difficult thing to disagree with - the implication is that if you disagree with the measure, then you passively support child abuse. The same went for the Patriot Act, which eroded lots of previously held rights.

There's a reason everyone else gets it and you don't.

Wrong… they don’t scan the photos at all; they scan the digital fingerprint created by the hash, then compare it to a database of previously seized illegal images to check for matches. If a match is found, then they decide what to do after reviewing it. If there is no match, nothing is done.

Please don’t leave out 80% of the process just so you can come here and tell me I ‘don’t get it’. Are you people seriously that bad at taking in information? Or are you just trolling?

I absolutely have no issue with identifying child porn and reporting it to the authorities, as it is part of the US legal system.

Then I will be absolutely OK with identifying adult porn and reporting it to authorities of other countries where it is illegal.

Then I will be welcoming my phone checking my content for the presence of some opposing political views and reporting it to some other country’s officials (their market is HUUUGE and important for Apple, and the authorities already know Apple can do that).

Then my phone will be scanned for photos of women with their face not covered, Apple can do that, you know…

In order to scan the 'digital fingerprint' and compare it, they need access to your private files without your explicit consent. The point is that they are still able to review them whether you say they can or not.

Perhaps take a step back and actually take in what people are telling you, instead of repeatedly insisting you are the smartest man in the room. You are being told by multiple people that you do not understand it, maybe there is a reason for that.

It’s encrypted and created as a code using the image data; that encrypted code is then compared to an existing database of images that have been seized by the authorities. If there is a match in those codes, the image is then reviewed.

How exactly are they ‘accessing your files’? It’s merely comparing a database of numbers. If there is no match, then literally nothing else happens; if there is a match, then a flag is raised. What exactly are you so concerned about?

Stop pretending that just because other people are making this into an issue, I’m automatically wrong. You clearly haven’t read the process and are choosing instead to parrot some nonsense about them invading your privacy, when they aren’t.

You're right, I really do not have enough crayons to explain this to you.

It’s really simple: if you don’t like this, then just don’t buy an Apple product… I’d rather they performed this scan and weeded out a few thousand nonces than sit on the internet crying about my supposed privacy being violated.

It’s sometimes good to see past the end of your own nose, but it’s now apparent you are unable to do so.

Your use of the term 'supposed privacy' and the lack of self-awareness in that last sentence are incredibly telling.

This makes zero sense. What exactly have I done wrong by seeing how this can produce a positive outcome? Please explain to me, oh wise one, exactly what is so wrong with me not having a problem with it?

What do I stand to lose? Why would some random photographer from Leeds be so much more knowledgeable about this than a huge international company? Why are other large tech companies already doing this if it’s so morally wrong?

Stop making pathetic sarcastic remarks and tell me exactly what I, as a normal person, stand to lose by this being rolled out.

There is nothing inherently wrong with seeing the positive, but it's not exactly a balanced view when you flat out refuse to see the negative. As you are doing, repeatedly. I understand the desire for a move against CSAM, but that doesn't mean that any direction towards it is a positive one.

Well, since you asked: my background is within the ISTAR (Intelligence, Surveillance, Target Acquisition and Reconnaissance) community in the UK armed forces. I have seen first-hand how the erosion of civil rights by private companies can have massive implications for everyday life in different countries around the world. Not in this way specifically (obviously), but enough that I can caution that people ought to be wary of giant tech making moves like this. There is a reason I specifically mentioned the Patriot Act: it's well known that it had a negligible impact on the thing it was introduced to fight and instead allowed a massive creep in surveillance of the general public. The Patriot Act built on similar (but less extensive) laws that had gradually been introduced over the years.

Given that Apple exist in countries other than the UK or the US, the technology could have far-reaching implications given the amount of private data people keep on their devices and the level of access countries like China demand of their public.

Which is fine, but they have already stated they won't be manipulated by governments into expanding the tech to do anything other than this task.

My view on the privacy and snooping: years ago, I used to get up to things that could be construed as breaking the law and therefore subject to surveillance (namely dance music and dark nightclubs, without going into it too much). Nowadays I just live my life and try to be the best person I can for both myself and others, so as far as I'm concerned, nothing I do or have on my device gives me any reason to be concerned about being 'watched'.

I understand there are nations/regimes out there who would love to use such technology for ill-gotten gains, but I'm 99.9% certain the very liberal people who make the decisions at Apple are not going to let this tech get anywhere near anybody who would look to violate people's human rights. I can't imagine, for example, Tim Cook agreeing to the Hungarian government weeding out gay people by scanning their Apple devices.

If it were to get into the wrong hands for the wrong reasons, then there absolutely is a negative, but seeing as they have quashed that theory, people here are just making wild assumptions, or accusations based on nothing more than opinion. In fact, reading through the comment history of some of them, it's almost a cliché that they are in here with these views.

The problem with that is that there are no laws in place preventing them from changing their stance on that. Obviously, because the state would want that level of access if they could have it. What happens in five years time, when they have new directors who fancy a change in policy direction? What happens if allowing the state to have the access they pledged they wouldn't give means that they could expand in a new territory for massive profit? What happens if this doesn't quite have the impact on child abuse that they thought it would so they decide to expand their power? The only assurance we have that they won't do this, is that they have said they promise they won't do it. There are no consequences for them changing that stance or going back on their word unless consumers vote with their feet and/or vote for legislation which puts into place some hard assurances. Remember this is private tech and therefore a global issue.

There is a reason that there is such opposition to this.

Aren't other large tech companies already doing it? So Apple are just falling in line. In my view, I'd rather trust the tech companies to have some morals than any government, from any country. You are correct that there is no law to stop them changing their stance, but there is such a thing as brand image, which companies rely on to have people buy into their system, so I can't see them putting that at risk.

For Apple in particular, this just looks like another excuse to stick the boot in (something these websites and the people who frequent them seem to enjoy), which is patently obvious from the wording used in the titles on sites like this and PetaPixel. My belief is that regardless of the label people like to give Apple, their intentions are mainly well natured.

Perhaps reading this article may help you with some actual facts around the tech and what it is going to do. If you can’t see how this is a good thing then sorry, but you have some serious issues.

What is it that is confusing to you? Is it the sarcasm? Is it the irony of a company becoming the world's largest police informant? Or is it the possibility that the software may have flaws and incorrectly report people to the police? Maybe it's that you'd be happy with Apple trawling through your phone, because if you have nothing to hide then you don't need privacy. (Sarcasm again.)

You’re making a whole lot of assumptions there that aren’t based on fact, and yes I have zero issue with this scan taking place on my devices.

Again, you haven’t even read through the process; you have clearly just read the title of the article and decided to make comments that, in your head, seem to be somehow correct.

I’m done dealing with yet another moron on this website.

Ah, now the ad hominem rebuttal. Ingenious. I'm glad you like to have your phone scanned. Some people - people with nothing to hide - will still object. And yes, I did read and understand the article.

China and Russia already know what is on the phones of their citizens.

Ah yes the "I have nothing to hide" defense.

I don’t understand your point? What am I ‘defending’?

I’ve just read through your comment history…. Don’t bother replying to me, thanks.

The next step will be random warrant-less searches of people's homes.

It would seem that there are a lot of people online who are concerned that's the likely eventuality of any form of covert surveillance.

Exactly. Yes, predators need to be hunted down... but how can innocent people's privacy be kept safe?

I saw comments from people who were worried about who decides how strict the algorithm is. For instance, if a parent has photos of their kids in bathing suits, will they have to worry about the police knocking on their door, or their kids being temporarily taken away by CPS while the matter is investigated?

That’s a very good point. I hadn’t considered parents taking innocent images of their own children. At what age is a nude child indecent? And who decides what is a parent documenting their child growing up and what is more nefarious?

If those parents’ photos of their kids in bathing suits aren’t in the database of already seized images that are needed for a match, they won’t be getting a knock on the door.

It seems these people commenting are only reading half the info then making assumptions.

No one has an issue with what Apple are doing with regard to child protection. The technology and its potential future misuse or misunderstanding are the concern.

Anyone concerned by this has a serious lack of understanding of just how much we as citizens can be surveilled by the authorities if they need to. It takes the police what, 20 minutes, to obtain phone records at a car accident to see if someone was texting whilst driving?

There is no privacy when it comes to technology, so you have to either suck it up or stop using it if it bothers you. Me personally, I don’t care.

Have a mortgage, library card, apartment lease, internet account, or cell phone, just to name a few? We are well past the age of privacy and well into being surveilled every second, both inside and outside of our residences.

Same with just stepping outside of your house. There are cameras everywhere. Some countries track how much gasoline you put in your vehicle.

I am in DC frequently - but was surprised to learn that Philadelphia has more cameras per capita.

Read full article at Fstoppers

Here’s why Apple’s new child safety features are so controversial

The Verge 12 August, 2021 - 07:10am

Encryption and consumer privacy experts break down Apple’s plan for child safety


Apple claims it designed what it says is a much more private process that involves scanning images on your phone. And that is a very big line to cross — basically, the iPhone’s operating system now has the capability to look at your photos and match them up against a database of illegal content, and you cannot remove that capability. And while we might all agree that adding this capability is justifiable in the face of child abuse, there are huge questions about what happens when governments around the world, from the UK to China, ask Apple to match up other kinds of images — terrorist content, images of protests, pictures of dictators looking silly. These kinds of demands are routinely made around the world. And until now, no part of that happened on your phone in your pocket.

I think for a company with as much power and influence as Apple, rolling out a system that changes an important part of our relationship with our personal devices deserves thorough and frequent explanation. I hope the company does more to explain what it’s doing, and soon.

Jen King: Thanks for having us.

Riana Pfefferkorn: Thank you.

RP: My name is Riana Pfefferkorn. I’m a research scholar at the Stanford Internet Observatory. I’ve been at Stanford in various capacities since late 2015, and I primarily focus on encryption policies. So this is really a moment in the sun for me, for better or for worse.

JK: I am a fellow on privacy and data policy at the Stanford Institute for Human-Centered Artificial Intelligence. I’ve been at Stanford since 2018, and I focus primarily on consumer privacy issues. And so, that runs the gamut across social networks, AI, you name it. If it involves data and people and privacy, it’s kind of in my wheelhouse.

JK: It doesn’t really raise any red flags for me, I don’t know about you, Riana.

RP: I’m not sure if this was part of their initial announcement, or if they hurriedly added it after the fact, once people started critiquing them or saying, oh my God, this is going to have such a terrible impact on trans and queer and closeted youth.

As it stands, I don’t think it’s controversial, I just am not convinced that it’s going to be all that helpful. Because what they are saying is, if you ask Siri, “Siri, I’m being abused at home, what can I do?” Siri will basically tell you, according to their documentation, go report it somewhere else. Apple still doesn’t want to know about this.

Note that they are not making any changes to the abuse reporting functionality of iMessage, which, as I understand it, is limited basically to like, spam. They could’ve added that directly in iMessage, given that iMessage is the tool where all of this is happening. Instead, they’re saying, if you just happen to go and talk to Siri about this, we will point you to some other resources that are not Apple.

JK: I think so. I say that with a small hesitation, because I am not sure (and Riana may know the answer to this) where they’re doing that real-time scanning to determine what the image itself is, or, I guess, the proportion of skin it probably contains. I assume that’s happening on the client side, on the phone itself. And I don’t know if Riana has any particular concerns about how that’s being done.

Most of the criticisms I’ve heard raised about this are some really good normative questions around what type of family and what type of parenting structure does this really seek to help? I’m a parent, I have my kid’s best interests at heart. But not every family operates in that way. And so I think there’s just been a lot of concerns that just assuming that reporting to parents is the right thing to do won’t always yield the best consequences for a wide variety of reasons.

RP: So, yeah — their documentation is clear that they are analyzing images on the device, and I know that there has been some concern that because it’s not transparent from their documentation exactly how this is happening, how accurate is this image analysis going to be. What else is going to get ensnared in this, that might not actually be as accurate as Apple is saying it’s going to be? That’s definitely a concern that I’ve seen from some of the people who work on the issue of trying to help people who have been abused, in their family life or by intimate partners.

And it’s something that honestly, I don’t understand the technology well enough, and I also don’t think that Apple has provided enough documentation to enable reasoned analysis, and thoughtful analysis. That seems to be one of the [things] they’ve tripped over, is not providing sufficient documentation to enable people to really inspect and test out their claims.

RP: This will be done on the client, baked into the operating system and deployed for every iPhone running iOS 15, once that comes out around the world. But it will only be turned on within the United States, at least so far. There is going to be an on-device attempt to make a hash of the photos you have uploaded to iCloud Photos and check that hash against the hash database maintained by the National Center for Missing and Exploited Children, or NCMEC, which covers known child sex abuse material, or CSAM for short.

There is not going to be actual CSAM on your phone. There’s not going to be a search of everything on your camera roll - only [the photos] that are going into iCloud Photos. If you have one image that is in the NCMEC database, that will not trigger review by Apple, where they will have a human in the loop to take a look. It will take some unspecified threshold number of images to trigger their system, which is more complex than I want to try and explain.

So, if there is a collection of CSAM material sufficient to cross the threshold, then there will be the ability for a human reviewer at Apple to review and confirm that these are images that are part of the NCMEC database. They’re not going be looking at unfiltered, horrific imagery. There is going to be some degraded version of the image, so that they aren’t going to be exposed to this. Really, it’s very traumatic for people who have to review this stuff.

And then if they confirm that it is, in fact, known CSAM, then that report goes to NCMEC, pursuant to Apple’s duties under federal law, and then NCMEC will involve law enforcement.
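The flow Pfefferkorn describes (count matches, and only surface an account for human review past a threshold) can be sketched roughly as follows. The threshold value and all names are invented for illustration; Apple has not published the real number:

```python
# Hypothetical review threshold; Apple has not disclosed the actual value.
REVIEW_THRESHOLD = 30

def accounts_to_review(match_counts: dict, threshold: int = REVIEW_THRESHOLD) -> list:
    """Return account IDs whose count of database matches crosses the
    threshold. Only these accounts are queued for a human reviewer, who
    sees degraded versions of the matched images, not the full pictures."""
    return sorted(acct for acct, n in match_counts.items() if n >= threshold)
```

A single stray match does nothing; the design only reveals anything to Apple once an account holds a collection of known material.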

JK: Right, although I think the use case is potentially quite different. One of the interesting questions is why Apple is doing this in such an aggressive and public way, given that they were not a major source of child sexual abuse imagery reports to begin with. But when you think about these different products in the online ecosystem, a lot of what you’re seeing is pedophiles sharing these things on very public platforms, even if they carve out small spaces within them.

And so they’re usually doing it on a platform, right? Whether it’s something like Facebook, WhatsApp, Dropbox, whatever it might be. And so, yes, in that case, you’re usually uploading imagery to the platform provider, it’s up to them whether they want to scan it in real time to see what you are uploading. Does it match one of these known images, or known videos that NCMEC maintains a database of?

That they’re doing it this way is just a really interesting, different use case than what we often see. And I’m not sure if Riana has any kind of theory behind why they’ve decided to take this particular tactic. I mean, when I first heard about it, the idea that I was going to have the entire NCMEC hash database sitting on my phone — I mean, obviously, hashes are extremely small text files, so we’re talking about just strings of characters that to the human eye, it just looks like garbage, and they don’t take up a lot of memory, but at the same time, the idea that we’re pushing that to everybody’s individual devices was kind of shocking to me. I’m still kind of in shock about it. Because it’s just such a different use case than what we’ve seen before.

RP: One of the concerns that has been raised with having this kind of client-side technology being deployed is that once you’re pushing it to people’s devices, it is possible — this is a concern of researchers in this space — for people to try and reverse-engineer that, basically, and figure out what is in the database. There’s a lot of research that’s done there. There are fears on one side about, well what if something that is not CSAM gets slipped into this database?

The fear on the other side is, what if people who have really strong motivations to continue trading CSAM try to defeat the database by figuring out what’s in it, figuring out how they can perturb an image, so that it slips past the hash matching feature.

And that’s something that I think is a worry, that once this is put onto people’s devices — rather than happening server-side as currently happens with other technologies such as PhotoDNA — that you are opening up an avenue for malicious reverse engineering to try and figure out how to continue operating, unimpeded and uncaught.
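The evasion worry RP describes follows from how image matching has to work. A cryptographic hash changes completely under a one-bit edit, so it only catches byte-identical copies; matching systems therefore use perceptual hashes that tolerate small edits, and that tolerance is exactly the surface an adversary probes. A minimal sketch, in which the byte strings and the tiny 8-bit "perceptual hashes" are hypothetical stand-ins:

```python
import hashlib

original = b"...image bytes..."
perturbed = original[:-1] + bytes([original[-1] ^ 1])  # flip a single bit

# A cryptographic hash changes completely after a one-bit edit,
# so it can only catch exact byte-for-byte copies.
print(hashlib.sha256(original).hexdigest()
      == hashlib.sha256(perturbed).hexdigest())  # False

# Perceptual hashes (PhotoDNA, NeuralHash) are instead designed so that
# visually similar images map to nearby hashes, compared by distance:
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical 8-bit perceptual hashes of an image and a lightly
# edited copy: a small distance still counts as a match, and that
# match boundary is what an adversary tries to perturb an image across.
img, edited = 0b1011_0110, 0b1011_0111
print(hamming(img, edited) <= 2)  # True: treated as the same image
```

This is why both fears RP lists attach to the same mechanism: the distance threshold that catches resized or re-encoded copies is also the thing attackers study once the database sits on their own device.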

I read some strident statements from the EFF (Electronic Frontier Foundation) and Edward Snowden, and others, calling this a backdoor into the iPhone. Do you think that is a fair characterization, Riana?

RP: I don’t like using the word backdoor because it’s a very loaded term, and it means different things to different people. And I don’t know that I agree with that, because this is all still happening on the client. Right? Apple is very careful to point out that iMessage remains end-to-end encrypted. And I agree that this gives an insight into what people are doing on their phone that was not there before. But I don’t know whether that means that you could characterize it as a backdoor.

I’ve heard a lot of people talking about, like, “Does this mean it’s not end-to-end encryption anymore? Does this mean it’s a backdoor?” I don’t care. I don’t care what we’re calling it. That’s a way of distracting from the main things that we’re actually trying to talk about here, which I think are: what are the policy and privacy and free expression data security impacts that will result from Apple’s decision here? And how will that go out beyond the particular CSAM context? And will what they’re doing work to actually protect children better than what they’ve been doing to date? So quibbling over labels is just not very interesting to me, frankly.

JK: There’s a couple things here. One is that you could take the position that Apple’s being extremely defensive here and saying, essentially, “Hey, pedophile community, we don’t want you here, so we’re going to, in a very public way, work to defeat your use of our products for that purpose.” Right? And that might be quite effective.

I want to actually add a little context here for why I’m in this conversation. Before I worked in academia, I used to work in [the tech] industry. I worked for about two years building a tool to detect and review CSAM. And when I worked on this project, it was very clear from the beginning that the goal was to get it off the servers of the company I was working for. Like — there was no higher goal. We were not going to somehow solve the child pornography problem.

That’s where I have a particular insight. One of the reasons Apple could be taking this stand could be a moral issue — it could be that they’ve decided that they just simply do not want their products associated with this type of material, and in a very public way they’re going to take a stand against it. I think you’re right. I think that there are people for whom, if you’re going to get caught using an Apple product, it’s probably because you weren’t necessarily well-versed in all the ways to try to defeat this type of thing.

[But] I think it’s really important to remember [that] when you talk about these issues and you think about this group of people, that they are a community. And there are a lot of different ways that you can detect this content. I would feel a lot better about this decision if I felt like what we were hearing is that all other methods have been exhausted, and this is where we are at.

And I am in no way of the belief that all other methods have been exhausted, by Apple or by kind of the larger tech community et al, who I think has really failed on this issue, given I worked on it from 2002 to 2004 and it’s gotten tremendously worse since that time. A lot more people have joined the internet since then, so it is kind of a question of scale. But I would say industry across the board has really been bad at really trying to defeat this as an issue.

JK: It’s important to understand that this is a community of users, and different communities use different products in different ways. When you’re in product design, you’re designing a product with particular users in mind. You kind of have your optimal user groups that you want to privilege the product for, who you want to attract, how you want to design the features for.

In the kind of work I did to try to understand this community, it became very clear that this group of users know what they’re doing is illegal. They don’t want to get caught, and they use things in materially different ways than other users. And so, if you’re willing to put in the time to understand how they operate, and to put in the resources to detect them, you can really see how they differ from other users. They don’t use these products the same way that you and I probably do. Right? They’re not loading up photos to share with friends and family. They’re operating under subterfuge. They know what they’re doing is highly illegal.

There’s often a great deal of pressure in terms of timing, for example. One of the things I witnessed in the work I did was that people often would create accounts and basically have an upload party. They would use the service at an extremely high rate for an extremely short amount of time and then ditch it, ditch whatever product they were working in. Because they knew that they only had a limited amount of time before they would get caught.

To just assume that you can’t potentially put in more work to understand how these people use your product, and that they may be detectable in ways that don’t require the types of work that we’re seeing Apple do — if I had more reassurance they’d actually kind of done that level of research and really exhausted their options I would probably feel more confident about what they’re doing.

I don’t want to just point the finger at Apple. I think this is an industry-wide problem, with a real lack of devotion to resources behind it.

RP: The trouble with this particular context is how extremely unique CSAM is compared to any other kind of abusive content that a provider might encounter. It is uniquely opaque in terms of how much outside auditability or oversight or information anybody can have.

I mentioned earlier that there’s a risk that people might be able to try and reverse-engineer what’s in the database of hashed values to try and figure out how they could subvert and sneak CSAM around the database.

The other thing is that it’s hard for us to know exactly what it is that providers are doing. As Jen was saying, there’s a bunch of different techniques that they could take and different approaches that they can employ. But when it comes to what they are doing on the backend about CSAM, they are not very forthcoming because everything that they tell people to explain what it is they’re doing is basically a roadmap to the people who want to abuse that process, who want to evade it.

So it is uniquely difficult to get information about this on the outside, as a researcher, as a user, as a policymaker, as a concerned parent, because of this veil of secrecy that hangs over everything to do with this whole process, from what is in the database, to what are different providers doing. Some of that sometimes comes out a little bit in prosecutions of people who get caught, by providers, for uploading and sharing CSAM on their services. There will be depositions and testimony and so forth. But it’s still kind of a black box. And that makes it hard to critique the suggested improvements, to have any kind of oversight.

And that’s part of the frustration here, I think: it’s very difficult to say, “You just have to trust us and trust everything all the way down from every point, from NCMEC on down,” and simultaneously, “Just know that what we’re doing is not something that has other collateral harms,” because for anything outside of CSAM, you have more ambiguity and legitimate use cases and context where it matters.

When it comes to CSAM, context does not matter. Something that I’ve been saying in recent days is: there’s no fair use for CSAM the way that there is for using copyrighted work. There’s this lack of information that makes it really difficult for folks like Jen or me or other people in civil society, other researchers, to be able to comment. And Jen, I’m so glad that you have this background, that you at least have both the privacy and the understanding from working on this from the provider’s side.

RP: They certainly learned that they won’t get any plaudits for that. You’ve identified that. This might be a point where they say other organizations scan using PhotoDNA in the cloud, and they do so over email. And I don’t know how well understood that is by the general public, that, for most of the services that you use, if you are uploading photos, they are getting scanned to look for CSAM for the most part. If you’re using webmail, if you’re using a cloud storage provider — Dropbox absolutely does.

But you’re right that they are not necessarily that forthcoming about it in their documentation. And that’s something that might kind of redound to the benefit of those who are trying to track and catch these offenders, is that there may be some misunderstanding or just lack of clarity about what is happening. That trips up people who trade in this stuff and share and store this stuff because they don’t realize that.

I guess there’s almost some question about whether Apple is kind of ensuring that there will be less CSAM on iCloud Photos three months from now than there is today, because they’re being more transparent about this and about what they are doing.

JK: There is a really complicated relationship here between the companies and law enforcement that I think bears mentioning, which is that, the companies, broadly, are the source of all this material. You know? Hands down. I don’t even know if you see offline CSAM these days. It’s all online, and it’s all being traded on the backs of these large organizations.

Holding CSAM is illegal. Every copy the platforms hold is a felony, essentially, a criminal felony. At the same time that they are the source of this material and law enforcement wants to crack down, law enforcement needs the platforms to report it. So there’s this tension at play that I think is not necessarily well understood from the outside.

There’s a bit of a symbiotic relationship here where, if the companies crack down too much and force it all off their services, it all ends up on the dark web, completely out of the reach of law enforcement without really heavy investigative powers. In some ways, that disadvantages law enforcement. One could argue that they need the companies to not crack down so much that it completely disappears off their services because it makes their job much harder. So there is a very weird tension here that I think needs to be acknowledged.

RP: I view this as a paradigm shift: moving where the scanning happens from the cloud, where you are making the choice to say, “I’m going to upload these photos into iCloud,” and it’s being held in third parties’ hands. You know, there’s that saying that “it’s not the cloud; it’s just somebody else’s computer,” right?

You’re kind of assuming some level of risk in doing that: that it might be scanned, that it might be hacked, whatever. Whereas moving it down onto the device — even if, right now, it’s only for photos that are in the cloud — I think is very different and is intruding into what we consider a more private space that, until now, we could take for granted that it would stay that way. So I do view that as a really big conceptual shift.

Not only is it a conceptual shift in how people might think about this, but also from a legal standpoint. There is a big difference between data that you hand over to a third party and assume the risk that they’re going to turn around and report to the cops, versus what you have in the privacy of your own home or in your briefcase or whatever.

I do view that as a big change.

JK: I would add that some of the dissonance here is the fact that we just had Apple come out with the “Ask App Not to Track” prompt. The underlying setting already existed, but they made that dialog box prominent, asking you when you use an app whether you want the app to track you. It seems a bit dissonant that they just rolled out that feature, and then suddenly, we have this thing that seems almost more invasive on the phone.

But I would say, as someone who’s been studying privacy in the mobile space for almost a decade, there is already an extent to which these phones aren’t ours, especially when you have third-party apps downloading your data, which has been a feature of this ecosystem for some time. This is a paradigm shift. But maybe it’s a paradigm shift in the sense that we had areas of the phone that we maybe thought were more off-limits, and now they are less so than they were before.

The illusion that you’ve been able to control the data on your phone has been nothing more than an illusion for most people for quite a while now.

RP: It’s a great point because there are a number of people who are kind of doing the equivalent of “If the election goes the wrong way, I’m going to move to Canada” by saying “I’m just going to abandon Apple devices and move to Android instead.” But Android devices are basically just a local version of your Google Cloud. I don’t know if that’s better.

And at least you can fork Android, [although] I wouldn’t want to run a forked version of Android that I sideloaded from some sketchy place. But we’re talking about a possibility that people just don’t necessarily understand the different ways that the different architectures of their phones work.

A point that I’ve made before is that people’s rights, people’s privacy, people’s free expression, that shouldn’t depend upon a consumer choice that they made at some point in the past. That shouldn’t be path-dependent for the rest of time on whether or not their data that they have on their phone is really theirs or whether it actually is on the cloud.

But you’re right that, as the border becomes blurrier, it becomes both harder to reason about these things from arm’s length, and it also becomes harder for just average people to understand and make choices accordingly.

JK: Privacy shouldn’t be a market choice. I think it’s a market failure, for the most part, across industry. A lot of the assumptions we had going into the internet in the early 2000s was that privacy could be a competitive value. And we do see a few companies competing on it. DuckDuckGo comes to mind, for example, on search. But bottom line, privacy shouldn’t be left up to... or at least many aspects of privacy shouldn’t be left up to the market.

RP: I have heard that idea from someone else I talked to about this and mentioned it to my colleague at SIO, Alex Stamos. Alex is convinced that this is a prelude to announcing end-to-end encryption for iCloud later on. It seems to be the case that, however they are encrypting iCloud photo data, they have said it is “too difficult to decrypt everything that’s in the cloud, scan it for CSAM, and do that at scale.” So it’s actually more efficient and, in Apple’s opinion, more privacy-protective to do this on the client side of the architecture instead.

I don’t know enough about the different ways that Dropbox encrypts its cloud, that Microsoft encrypts its cloud, versus how iCloud does it, to know whether Apple is in fact doing something different that makes it uniquely hard for them to scan in the cloud the way that other entities do. But certainly, looming over all of this is that there have been several years’ worth of encryption fights, not just here in the US but around the world, primarily focused in the last couple of years on child sex abuse material. Prior to that, it was terrorism. And there are always concerns about other types of material as well.

One thing that’s a specter looming over this move by Apple is that they may see this as something where they can provide some kind of a compromise and hopefully preserve the legality of device encryption and of end-to-end encryption, writ large, and maybe try and rebuff efforts that we have seen, including in the US, even just last year, to effectively ban strong encryption. This might be, “If we give an inch, maybe they won’t take a mile.”

RP: This is my primary concern. The direction I think this is going is that we don’t have, ready to go, hash databases of other types of abusive content besides CSAM, with the exception of terrorist and violent extremist content. There is a hash-sharing database run by GIFCT (the Global Internet Forum to Counter Terrorism), an industry collaboration to collaboratively contribute imagery of terrorist and violent extremist content, largely arising out of the Christchurch shooting a few years back, which really woke up a new wave of concern around the world about providers hosting terrorist and violent extremist material on their services.

So my prediction is that the next thing that Apple will be pressured to do will be to deploy the same thing for the GIFCT database as they are currently doing for the NCMEC database of CSAM hashes. And from there on, I mean, you can put anything you’d like into a hashed image database.

Apple just said, “If we’re asked to do this for anything but CSAM, we simply will not.” And, that’s fine, but why should I believe you? Previously, their slogan was, “What happens on your iPhone stays on your iPhone.” And now that’s not true, right?

They might abide by that where they think the reputational trade-off is not worth the upside. But what if the choice is between either you implement this hash database of images that a particular government doesn’t like, or you lose access to its market and never get to sell a Mac or an iPhone in that country again? For a large enough market, like China, I think that they will fold.

India is one place that a lot of people have pointed to. India has a billion people. They actually are not that big of a market for iPhones, at least commensurate with the size of the market that currently exists in China. But the EU is. The European Union is a massive market for Apple. And the EU just barely got talked off the ledge from having an upload filter mandate for copyright-infringing material pretty recently. And there are rumblings that they are going to introduce a similar plan for CSAM at the end of this year.

For a large enough market, basically, it’s hard to see how Apple, thinking of their shareholders, not just of their users’ privacy or of the good of the world, continues taking that stand and says, “No, we’re not going to do this,” for whatever it is they’re confronted with. Maybe if it’s lese majeste laws in Thailand that say, “You are banned from letting people share pictures of the king in a crop top” — which is a real thing — maybe they’ll say, “Eh, this market isn’t worth the hit that we would take on the world stage.” But if it’s the EU, I don’t know.

RP: I think that there are absolutely a lot of folks that you could talk to who would quietly admit that they might think — if this really did get limited only ever to CSAM for real — that that might be a compromise that they could live with. Even though we’re talking about moving surveillance down into your device. And, really, there’s no limitation on them for only doing this for iCloud photos. It could be on your camera roll next. If we really believe that this would not move beyond CSAM, there are a lot of folks who might be happy with that trade-off.

Going back to your question about what a backstop might be, though, to keep it from going beyond CSAM, this goes back to what I mentioned earlier about how CSAM is really unique among types of abuse. Once you’re talking about literally any other type of content, you’re necessarily going to have an impact on free expression: on news, commentary, documentation of human rights abuses, all of these things.

And that’s why there’s already a lot of criticism of the GIFCT database that I mentioned, and why it would be supremely difficult to build out a database of images that are hate speech, whatever that means. Much less something that is copyright infringing. There is nothing that is only ever illegal and there’s no legitimate context, except for CSAM.

So I think that this is a backstop that Apple could potentially try to point to. But just because it would trample free expression and human rights to deal with this for anything else — I don’t necessarily know that that’s something that’s going to stop governments from demanding it.

JK: I would argue that your example points to one of the easiest examples of that whole genre, and that it’s much harder from those extreme examples to work backwards to “what is terrorism” versus “what are groups engaging in rightful protests on terrorism-related issues,” for example? The line-drawing becomes much, much harder.

To kind of add some context to what Riana was saying, we are very much talking about the US and the fact that this content is illegal in the US. In Europe, those boundaries, I think, are much broader because they’re not operating under the First Amendment. I’m not a lawyer, so I’m definitely speaking a little bit outside my lane, but there isn’t the same free speech absolutism in the EU because they don’t have the First Amendment we have here in the US. The EU has been much more willing to try to draw lines around particular content in ways we don’t here.

RP: I think that there are different regimes in different countries for the protection of fundamental rights that look a little different from our Constitution. But they exist. And so, when there have been laws or surveillance regimes that would infringe upon those, there are other mechanisms, where people have brought challenges and where some things have been struck down as being incompatible with people’s fundamental rights as recognized, in other countries.

And it’s very difficult to engage in that line-drawing. I have a side hustle talking about deepfakes. There is absolutely a lot of interest in trying to figure out, okay, how do we keep mis- and disinformation from undermining democracy, from hurting vaccine rollout efforts, and also from having deepfakes influence an election. And it would be real easy — this is what law professors Danielle Citron and Bobby Chesney call “the liar’s dividend” — for a government that does not like evidence of something that actually happened, something that is true and authentic but inconvenient for them, to say, “That’s fake news. That is a deepfake. This is going in our database of hashes of deepfakes that we’re gonna make you implement in our country.”

So there are all of these different issues that get brought up on the free expression side once you’re talking about anything other than child sex abuse material. Even there, it takes a special safe harbor under the federal law that applies to make it okay for providers to have this on their services. As Jen was saying, otherwise that is just a felony, and you have to report it. If you don’t report it, you don’t get the safe harbor, and that provider is also a felon.

The National Center for Missing and Exploited Children is the only entity in America that is allowed to have this stuff. There are some debates going on in different places right now about whether there are legitimate applications for using CSAM to train AI and ML models. Is that a permissible use? Is that re-victimizing the people who are depicted? Or would it have an upside in helping better detect other images? Because the more difficult side of this is detecting new imagery, rather than detecting known imagery that’s in a hashed database.

So even there, that’s a really hot button issue. But it gets back to Jen’s point: if you start from the fuzzy cases and work backwards, Apple could say “We’re not going to do this for anything other than CSAM because there’s never going to be agreement on anything else other than this particular database.”

Apple has also said they are not compiling the hashed databases, the image databases themselves. They’re taking what is handed to them, with the hashes, that NCMEC provides or that other child safety groups in other countries provide. If they don’t have visibility into what is in those databases, then again, it’s just as much of a black box to them as it is to anybody else. Which has been a problem with GIFCT: we don’t know what’s in it. We don’t know if it contains human rights documentation or news or commentary or whatever. Rather than just something that everybody can agree nobody should ever get to look at ever, not even consenting adults.

RP: Well, Apple is saying one of the protections against non-CSAM uses of this is that they have a human in the loop who reviews matches, if there is a hit for a sufficiently large collection of CSAM. They will take a look and be like, “Yep, that matches the NCMEC databases.” If what they’re looking at is the Thai king in a crop top, then they can say, “What the heck? No, this isn’t CSAM.” And supposedly, that’s going to be another further layer of protection.

I think that I have already started seeing some concerns, though, about, “Well, what if there’s a secret court order that tells NCMEC to stick something in there? And then NCMEC employees have to just go along with it somehow?” That seems like something that could be happening now, given that PhotoDNA is based off of hashes that NCMEC provides even now for scanning Dropbox and whatever.

This is really highlighting how it’s just trust all the way down. You have to trust the device. You have to trust the people who are providing the software to you. You have to trust NCMEC. And it’s really kind of revealing the feet of clay that I think is kind of underpinning the whole thing. We thought our devices were ours, and Apple had taken pains during Apple v. FBI to say, “Your device is yours. It doesn’t belong to us.” Now it looks like, well, maybe the device really is still Apple’s after all, or at least the software on it.

RP: The fact that Apple rolled this out with maybe a one day’s heads up to some people in civil society orgs and maybe some media, isn’t helpful. Nobody was brought into this process while they were designing this, to tell them, “Here are the concerns that we have for queer 12-year-olds. Here are the concerns for privacy. Here are the civil liberties and the human rights concerns,” all of that. It looks like this was just rolled out as a fait accompli with no notice.

With, I have to say, really confusing messaging, given that there are these three different components and it was easy to conflate two of them and get mixed up about what was happening. That has further caused a lot of hammering and wailing and gnashing of teeth.

But if they had involved elements of civil society other than, presumably, NCMEC itself and probably law enforcement agencies, maybe some of the worst could have been averted. Or maybe they would have ignored everything that we would have said and just gone forth with the thing that they’re doing it as-is.

But, as Jen and I can tell you, we have both been consulted before by tech companies who have something that impacts privacy. They’ll preview that for us in a meeting and take our feedback. That’s standard practice for tech companies, at least at some points. If you don’t really care what people’s feedback is, then you get feedback from people later and later in the process.

But if they had really wanted to minimize the free expression and privacy concerns, then they should have consulted with outsiders, even if there are voices they thought that would be “too screechy,” as the executive director of NCMEC called everybody who expressed any kind of reservation about this. Even if they didn’t want to talk to what Apple might think is somehow the lunatic fringe or whatever, they could have talked to more moderate voices. They could have talked to academics. They could have talked to me, although I’m probably too screechy for them, and at least taken those concerns back and thought about them. But they didn’t.

JK: I think the image hash matching ships. I don’t know about the “nanny cam,” again, for lack of a better word.

I predict that they will double down on the CSAM image scanning for all of the different reasons we’ve talked about today. I think Riana really hit the nail on the head — I think there’s some kind of political strategizing going on behind the scenes here. If they are trying to take a bigger stand on encryption overall, that this was the piece that they had to give up to law enforcement in order to do so.

RP: I think certainly for the stuff about Siri that is uncontroversial, they’ll keep rolling that out. I’m not certain, but it seems like the iMessage stuff either wasn’t messaged clearly at the beginning, or maybe they really did change over the course of the last few days in terms of what they said they were going to do. If that’s true, and I’m not sure whether it is, that then indicates that maybe there is some room to at least make some tweaks.

However, the fact that they rolled out this whole plan as a fait accompli, that’s going to be put into iOS 15 at the very end, without any consultations, suggests to me that they are definitely going to go forward with these plans. With that said, there may be some silver lining in the fact that civil society was not consulted at any point in this process, that now, maybe there’s an opportunity to use this concerted blowback as a way to try and get pushback in that might not have been possible, had civil society been looped in all along the way, and incorporated and neutralized, almost.

So, I’m not sanguine about the odds of them just not deploying this CSAM thing at all. Don’t get me wrong, I would love to be wrong about the slippery-slope arguments, that the next thing will be demands to do this for GIFCT, and then it’ll be on to deepfakes and copyright infringement. I would love to be proved wrong about that, even as silly as it would make me look. But I’m not sure that that’s going to be the case.

Tool that checks for child sex abuse images could be on its way to Britain

Daily Mail 12 August, 2021 - 07:10am

Last Thursday, Apple announced that it would be implementing several new technologies in upcoming versions of its operating systems designed to identify and flag child pornography. Sounds like a good idea, right? Well, yes… and also no.

If that sounds like there’s some nuance, you’re right. But at least we can all relax in the knowledge that humans handle nuance super well.

Yeah, the horny one has seen a number of reactions to this ranging from “ALL IS WELL, CITIZEN!” to “THEY TOOK AWAY MY NAME AND GAVE ME A NUMBER!” The Washington Post, which loves to “AH-HA!” Apple, provided a piece that is resplendent in both its flamboyant j’accusosity as well as its numerous inaccuracies. In footnotes to his piece on Apple’s announcement, John Gruber details exactly what’s wrong with The Post’s piece.

It is hilarious to the Macalope that the domain of Apple gotcha-ism has graduated from the Forbes contributor network and red tide clambake to the pages of the paper that took down Nixon. Most recently The Post tried to make hay from a report sample that had more iPhones being infected with spyware than Android devices. The problem being that the reason more iPhones showed up infected was simply that iOS’s logs are better and the researchers cautioned against making relative judgments of the platforms based on their findings.

But before you either have a fit about Apple’s new anti-child pornography technology or rush to Apple’s defense, the Macalope suggests you read Apple’s page on Child Safety, Gruber’s aforementioned post on this topic as well as Ben Thompson’s (subscription) if you can. The Macalope knows knowing things before you opine on them is old-fashioned but just humor him this once, can’t you?

Short story, though, there are two controversial aspects of this. First, if you store your photos in iCloud, Apple will be comparing their identifiers to a database of identifiers of known child pornography compiled by a government organization tasked with tracking such material. If it finds a number of such images over a certain numerical threshold, Apple will be alerted and look into the situation. The automated system doesn’t look at or scan the images, it just checks identifiers to see if any are known child pornography.
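To make the described flow concrete, here is a minimal sketch of threshold-based identifier matching. This is purely illustrative: the real system uses Apple's NeuralHash (a perceptual hash) plus private set intersection rather than plain set lookups, and Apple has not disclosed the actual threshold, so every name and number below is a hypothetical stand-in.

```python
# Hypothetical database of known-bad image identifiers (placeholder values).
KNOWN_BAD_HASHES = {"a3f1...", "9c0d..."}

# Apple has not disclosed the real threshold; 30 is an assumed value.
MATCH_THRESHOLD = 30

def count_matches(photo_hashes):
    """Count how many of a library's identifiers appear in the known database."""
    return sum(1 for h in photo_hashes if h in KNOWN_BAD_HASHES)

def should_escalate(photo_hashes):
    """Only libraries at or above the threshold are surfaced for review."""
    return count_matches(photo_hashes) >= MATCH_THRESHOLD
```

The key property the article describes is visible here: a single match, or any count below the threshold, produces no alert at all.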

Second, Apple is providing an opt-in option for photos messaged to or from children’s iCloud accounts on iOS devices. In describing this feature, Apple says:

Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.
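The decision logic Apple describes can be sketched as a simple gate: the classifier's output only matters for child accounts whose family has opted in, and a high score leads to a blur-and-warn presentation rather than any upload to Apple. The classifier itself is an unpublished on-device model, so `explicit_score` and the cutoff below are hypothetical stand-ins.

```python
# Assumed cutoff for the on-device classifier's score (illustrative only).
EXPLICIT_THRESHOLD = 0.9

def present_attachment(explicit_score, is_child_account, opted_in):
    """Return how the Messages UI would present an incoming image."""
    if not (is_child_account and opted_in):
        return "show"              # feature is opt-in and child accounts only
    if explicit_score >= EXPLICIT_THRESHOLD:
        return "blur_and_warn"     # child sees a blurred image plus resources
    return "show"
```

Note that every branch runs on the device; nothing in this flow sends the image or the score to Apple, which is the point of the "Apple does not get access to the messages" claim.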

The concerns about the first technology center around what will happen if a government comes to Apple with its own database, one that includes images of political dissent or support for “subversive” ideas such as underhanded toilet paper rolling which, while inherently wrong, should not be persecuted. Apple has stated categorically that it will reject any such requests.

The Macalope certainly hopes they stick to that because when you give a dictator knowledge of cookie-stealing technology… he’s gonna wanna steal some cookies.

That technology only examines photos that are stored in iCloud Photos. If you want to opt out, don’t store your photos in iCloud. That’s a pain for many, but at least it’s consistent with Apple’s existing agreement with law enforcement that while it can’t/won’t unlock iPhones, it can/will unlock iCloud backups.

The complaints about the second technology are that while it’s opt-in and currently only targeted at child accounts, it is dangerous to have on-device scanning of images implemented at all, because… things change! Musical styles, tie widths, and, yes, the lengths to which governments will go to get companies to poke into the lives of private citizens.

Given Apple’s commitment to keeping Messages encrypted during transmission, this is the only way to do this. The only other alternative is to simply not scan for these images at all. You can argue that’s what Apple should do. But it’s worth noting that, as Thompson points out, governments are moving toward mandating companies look for child pornography, partly because it’s really one of the worst things imaginable and partly because “Won’t someone think of the children?” really sells with voters.

Protecting children is great. We all want that. What we don’t want is to make it easier for bad people to take advantage of our good nature.

And, as John Gruber notes, it’s possible that Apple is implementing these technologies as a precursor to “providing end-to-end encryption for iCloud Photo Library and iCloud device backups.” In other words, as a way to cut off complaints of law enforcement and legislators when it provides more individual privacy protection. The Macalope would like to believe that’s what’s happening but, at this point, it’s just speculation.

While the Macalope rejects those who have gone from zero to 60 speeding down Outrage Boulevard in their histrionics mobiles over this, it is not wrong at all to be concerned about these new technologies. All this is definitely something we need to keep our eyes on. The bottom line for companies in a capitalist system is the bottom line, and given other decisions Apple has made to comply with anti-privacy laws in foreign states, it’s not unreasonable to be concerned about how this could be perverted. Apple executives surely care about both protecting children and privacy, but despite how we treat companies in this country as individuals, companies do not have souls.

With the possible exception of Dolly Parton Enterprises Inc.

Apple Privacy exec details system to detect CSAM in new interview

9to5Mac 12 August, 2021 - 07:10am

- Aug. 10th 2021 9:01 am PT

Last week, Apple announced three new features that target child safety on its devices. While the intentions are good, the new features have drawn scrutiny, with some organizations and Big Tech CEOs opposing Apple’s announcement.

The company published a FAQ about all of these new features and how they will work. Now, trying to avoid more controversy, Apple Privacy head Erik Neuenschwander addressed concerns about its new systems to detect CSAM in an interview with TechCrunch.

These features to protect children use CSAM detection in iCloud Photos, Communication Safety in Messages, and interventions in Siri and Search. Although these measures were announced together and are related, they serve different purposes.

You can learn more about all of this here.

In the interview with Neuenschwander, TechCrunch addresses some of the users’ concerns. For example, Neuenschwander explains why Apple announced the Communication Safety feature in Messages alongside the CSAM detection in iCloud Photos feature:

As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. (…) It is also important to do things to intervene earlier on when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

Another concern centers on governments and agencies trying to use this measure as a backdoor, to which Neuenschwander responds that Apple intends to “leave privacy undisturbed for everyone not engaged in illegal activity.”

Asked whether Apple should be trusted if a government tries to compromise this new system, the Apple privacy head says:

Well first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.

Neuenschwander also reinforces that if iCloud Photos is disabled, NeuralHash will not run and will not generate any vouchers:

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos.
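The gating Neuenschwander describes can be summarized as a short pipeline sketch: with iCloud Photos disabled, none of the stages run. Function and stage names here are illustrative, not Apple's actual internal API.

```python
def process_photo(icloud_photos_enabled):
    """Return the pipeline stages that would run for one photo.

    With iCloud Photos off, the list is empty: NeuralHash never runs,
    no safety voucher is created, and nothing is uploaded.
    """
    if not icloud_photos_enabled:
        return []
    return ["neural_hash", "create_safety_voucher", "upload_voucher"]
```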


Apple privacy head explains privacy protections of CSAM detection system | AppleInsider

AppleInsider 10 August, 2021 - 11:36am


Apple's privacy chief Erik Neuenschwander has detailed some of the protections built into the company's CSAM scanning system that prevent it from being used for other purposes, including clarifying that the system performs no hashing if iCloud Photos is off.

The company's CSAM detection system, which was announced with other new child safety tools, has caused controversy. In response, Apple has offered numerous details about how it can scan for CSAM without endangering user privacy.

In an interview with TechCrunch, Apple privacy head Erik Neuenschwander said the system was designed from the start to prevent government overreach and abuse.

For one, the system only applies in the U.S., where Fourth Amendment protections already guard against illegal search and seizure.

"Well first, that is launching only for US, iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren't the US when they speak in that way," Neuenschwander said "And therefore it seems to be the case that people agree US law doesn't offer these kinds of capabilities to our government."

But even beyond that, the system has baked-in guardrails. For example, the hash list that the system uses to tag CSAM is built into the operating system. It can't be updated from Apple's side without an iOS update. Apple also must release any updates to the database on a global scale — it can't target individual users with specific updates.

The system also only tags collections of known CSAM. A single image isn't going to trigger anything. More than that, images that aren't in the database provided by the National Center for Missing and Exploited Children won't get tagged either.

Apple also has a manual review process. If an iCloud account gets flagged for a collection of illegal CSAM material, an Apple team will review the flag to ensure that it's actually a correct match before any external entity is alerted.
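Putting the last two paragraphs together, the escalation path has two distinct gates: an automated count against an undisclosed threshold, then a human review before any external report. A minimal sketch of that flow, with all names and the threshold value hypothetical:

```python
def handle_account(match_count, threshold, reviewer_confirms):
    """Escalation path for one iCloud account.

    Nothing happens below the threshold; above it, a report is filed
    only after a human reviewer confirms the matches are genuine.
    """
    if match_count < threshold:
        return "no_action"          # a single or stray match triggers nothing
    if reviewer_confirms:
        return "report_to_ncmec"    # external entity alerted only after review
    return "dismiss_flag"           # false positive caught by manual review
```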

"And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM and that we don't believe that there's a basis on which people will be able to make that request in the US," Neuenschwander said.

Additionally, Neuenschwander added, there is still some user choice here. The system only works if a user has iCloud Photos enabled. The Apple privacy chief said that, if a user doesn't like the system, "they can choose not to use iCloud Photos." If iCloud Photos is not enabled, "no part of the system is functional."

"If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image," the Apple executive said. "None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you're not using iCloud Photos."

Although Apple's CSAM feature has caused a stir online, the company refutes that the system can be used for any purposes other than detecting CSAM. Apple clearly states that it will refuse any government attempt to modify or use the system for something other than CSAM.



Apple Remains Committed to Launching New Child Safety Features Later This Year

MacRumors 10 August, 2021 - 08:58am

First, an optional Communication Safety feature in the Messages app on iPhone, iPad, and Mac can warn children and their parents when receiving or sending sexually explicit photos. When the feature is enabled, Apple said the Messages app will use on-device machine learning to analyze image attachments, and if a photo is determined to be sexually explicit, the photo will be automatically blurred and the child will be warned.

Second, Apple will be able to detect known Child Sexual Abuse Material (CSAM) images stored in iCloud Photos, enabling Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in collaboration with U.S. law enforcement agencies. Apple confirmed today that the process will only apply to photos being uploaded to iCloud Photos and not videos.

Third, Apple will be expanding guidance in Siri and Spotlight Search across devices by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.

Since announcing the plans last Thursday, Apple has received some pointed criticism, ranging from NSA whistleblower Edward Snowden claiming that Apple is "rolling out mass surveillance" to the non-profit Electronic Frontier Foundation claiming that the new child safety features will create a "backdoor" into the company's platforms.

"All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts," cowrote the EFF's India McKinney and Erica Portnoy. "That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change."

The concerns extend to the general public, with over 7,000 individuals having signed an open letter against Apple's so-called "privacy-invasive content scanning technology" that calls for the company to abandon its planned child safety features.

At this point in time, it does not appear that any negative feedback has led Apple to reconsider its plans. We confirmed with Apple today that the company has not made any changes as it relates to the timing of the new child safety features becoming available — that is, later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. With the features not expected to launch for several weeks to months, though, the plans could still change.

Apple sticking to its plans will please several advocates, including Julie Cordua, CEO of the international anti-human trafficking organization Thorn.

"The commitment from Apple to deploy technology solutions that balance the need for privacy with digital safety for children brings us a step closer to justice for survivors whose most traumatic moments are disseminated online," said Cordua.

"We support the continued evolution of Apple's approach to child online safety," said Stephen Balkam, CEO of the Family Online Safety Institute. "Given the challenges parents face in protecting their kids online, it is imperative that tech companies continuously iterate and improve their safety tools to respond to new risks and actual harms."
