Hacker News: chrisballinger's comments

At first I was a bit confused about why The Onion’s spin-off site had a cloud offering, but that one is called ClickHole.


In theory couldn't it tunnel ethernet over the TB4 cable?


if you're connecting the display to a machine, why not just use the machine's OS at that point?


You could plug the monitor into a router through a Thunderbolt or USB-C to Ethernet adapter. An A13 SoC just for image and audio processing feels like overkill, and making the screen autonomous would have been a great way to justify it.

Although making a new user interface on top of iOS is a lot of work. Bare iOS SpringBoard on a $1.6k display won't cut it. tvOS would've been great if the monitor had wireless, but now you need to bump the price, and it feels like you're selling a very overpriced 27 inch smart TV. And I don't think the A13 can smoothly drive a 5k display.


You can now renew your AppleCare+ on eligible iPhones, iPads and Macs at a monthly/yearly rate after the 2-3 year coverage period expires as long as you do it within 30 days of expiration: https://support.apple.com/en-us/HT210580

Would highly recommend extending for a laptop needing this many repairs.


Only in the US afaik...


Very interesting concept, but I'd recommend checking out GRDB[1] if you're in search of a mature persistence layer for your iOS apps. It's a modern SQLite wrapper with a lot of conveniences for application development like value types, Codable mapping, Combine observables, etc.

I've deployed it in production on a few apps so far, and it has been a real joy to use.

1. https://github.com/groue/GRDB.swift


Sure. I like the idea of FlatBuffers. I think GRDB is also good as I'm reading more and more about it. Thanks for sharing.



I would love to see all “Share the Road” signs replaced with “Bicycles May Use Full Lane” [1] signs. This is the law in California but many motorists are unaware, or don’t care.

1: https://en.wikipedia.org/wiki/Bicycles_May_Use_Full_Lane


This makes it seem like it would be some kind of exception. Bikes can ALWAYS use the full lane!


Perhaps put it on billboards so that it's clear that it applies to all roads, not just that one road.

I'd also suggest putting it on driving tests, but I've never taken a state driving test where they go through and make sure you understand which questions you got wrong.


I agree.

Though my dad thinks you only have to slow for pedestrians where there is a posted sign reminding you of that fact. So you can't win.

("Can't win" doesn't mean you shouldn't try.)


Agree. My personal experience: walking from the Walmart parking lot to the Walmart doors, crossing the hatched section of asphalt directly in front of the entrance, half of the drivers do not stop even after looking at me. Sunny day, perfect visibility. It might be because it's a majority-white Republican county full of pickup trucks.


I would appreciate these disclosures a lot more if the author didn’t always include a flippant dismissal of security architecture improvements in macOS. Yes, it’s harder to write software with sandboxing and other modern security techniques, but that doesn’t mean we should go back to how things were.


It's not flippant, read through the author's history: https://lapcatsoftware.com/articles/index.html

This is a serious stance of his, with a lot of serious data and arguments to back it up, from a serious engineer who has written an impressive list of Mac software both for Apple and for Apple's customers.


You did use the word "serious" enough times to make it sound compelling. But the author's biography doesn't mean his comment wasn't flippant.

He's proved that a well-behaved, codesigned app can list metadata about files in restricted directories. He hasn't proven the sandbox is compromised.

You claim he has so much serious evidence, link us there. Don’t just string adjectives together.

I have great respect for Jeff, but he is one of the more outspoken complainers among Apple devs. At least he has a better basis for his commentary than DHH.


A well behaved, codesigned app being able to list metadata about files in restricted directories is a sandbox compromise. In what viewpoint is it not?


As pointed out by the most-upvoted top-level comment, it's a kernel issue.


That doesn't mean it's not an issue.

I would like Apple to not roll out BS prompts that make my life more difficult until those prompts are actually capable of protecting some of the most sensitive data on my machine.


A kernel issue where it fails to adequately enforce the sandbox?


whatever man. you had a good go at me the other day. you're right I'm wrong, and HN is no longer the place for me


Did I? The only other interaction I had with you recently that I can find is a discussion about Apple's security policies, which seemed fairly reasonable to me.


The biggest issue with the author is that he complains both about the controlling/locked down nature of Apple’s platforms and about any bugs that show up in that system.

I.e. His goal is to criticize Apple no matter what they do, because he dislikes the fact that they are no longer producing the kind of open system he prefers.


I think the angle he has is “Apple should remove these protections because they can’t implement them correctly”.


Yeah, which doesn’t seem reasonable, especially when mixed in with a bunch of assumptions about ill intent.

There are an enormous number of protections and a small number of issues, which do eventually get fixed, and of course the threats are undeniable.

However, you are right that Apple is notoriously bad at communicating about bugs.


At least in this case, the lack of reaction from Apple shows that his accusations are not baseless. Don't blame it on the messenger.


My thoughts exactly. If this was just an overlooked bug, which was reported to Apple and which Apple then fixed, that would be the system working as intended.

In reality, a very simple bug was reported more than a year ago, and Apple apparently hasn't cared enough to fix it. The only way I can interpret that is to conclude Apple doesn't really care about the integrity of their sandbox.

IMO, this more than justifies the author's accusation of "security theater". My browsing history is among the most sensitive data on my machine—certainly more private than anything in my Documents folder, which Apple felt the need to protect in a highly-disruptive way. I agree that it can be worth trading some degree of usability for privacy and security, but only if those privacy benefits are real. If they're not, then we're left in the worst of both worlds.

It's really quite damning.


> The only way I can interpret that is to conclude Apple doesn't really care about the integrity of their sandbox.

There are many other ways to interpret it. Here is one completely made-up example that I created just now for this reply:

"Apple can't lock this down further without breaking open() calls in the majority of existing applications; therefore, they made a pragmatic choice to allow this issue to exist until their long-term roadmap plan to remove direct disk access to protected folders ships in a future macOS update; while declining to share their decision with the reporter, as is completely normal for Apple."

If you define "security theater" as "any practice that would not stand up to a human attacker", then all security is guaranteed by definition to be security theater, since all security protections will be found to have weaknesses, compromises, and design decisions that could be theoretically exploited. That definition is clearly non-viable in reality, and so all security decisions — even Apple's — will have unpalatable outcomes that do not invalidate the relevance of security.


> while declining to share their decision with the reporter, as is completely normal for Apple

This is completely normal for Apple, but that doesn’t make it OK for them to treat security fixes like product launches where they can choose an arbitrary timeline and keep the reporter hanging forever.


Sure but it also doesn’t justify innuendo about Apple not caring about privacy.

You know as well as I do that this stuff is complicated.


Yeah, it probably is; maybe it requires substantial changes in the kernel or something. The issue is that Apple never communicates this, they just sit on bugs until they fix them. This is a really poor experience for people reporting issues.


These security features are only nominally about protecting the user. Apple implements them to protect their services and platforms from competition and sells them via the privacy argument.

Does it happen to improve the security situation? Yes, for many people it does. Is it worth the cost? That's debatable, especially because of Apple's apparent apathy (and occasional hostility) towards the community.


> These security features are only nominally about protecting the user. Apple implements them to protect their services and platforms from competition and sells them via the privacy argument.

Stallman[1] and others[2] have talked about just this issue for over a decade now.

[1] https://www.gnu.org/philosophy/can-you-trust.en.html

[2] https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html


This is explicitly untrue.


Haven't you ever considered that the flippant attitude is exactly why you are able to learn of this now (and not 10 years later, if at all)? If everyone were "policy-abiding" folks who give a megacorp the benefit of the doubt, these issues wouldn't be disclosed for years.

The flippant attitude is exactly why and how you are reading of all these vulnerabilities now.

Knowledge of the issue (but enduring "flippancy") or not knowing it at all? You pick.

What I'm really saying is that this "flippancy" is the agency that's making someone write a blog post, sign their name to it, put it out there with code samples, etc. You dismissing "flippancy" is insulting the agency of this. Without that emotion, that idea where they thought Apple wasn't treating them well, that is the source where people find the energy to publish, to publicise.

Every single word takes strength to write. In this case, the flippancy was the driving force and it shows clearly.

Why would you dismiss that energy?

And no, it's not the author's job to "shield" you from the wrath of their flippancy. I take it and I thank "flippancy" for disclosing this issue.


We shouldn’t put the burden on the bug finder to be “nice” and persistent about doing Apple’s job for them. We should put the burden on Apple - the first trillion dollar company - to take bug reports seriously. Given that iOS and macOS have pretty novice security vulnerabilities (allowing apps to view Safari browsing history, allowing iOS apps to detect if a device is jailbroken), why is it up to the bug finder to be nice about it?


Then you should probably stop reading security disclosures. Security researchers tend not to be terribly considerate of egos.

> but that doesn’t mean we should go back

I don't think you understand the author's stance.


If you’re concerned about your carbon footprint and live within 2.5 miles of work, you might want to consider a bicycle instead! Regular cycling has made a profoundly positive impact to my life, and I would highly encourage you to explore that as an option.


I think you're right. I've been thinking for a while that the real future in electric vehicles is not Teslas or any EV meant to replace a car, but e-bikes. Cheap, great range, you get some exercise, and for every person on a bike, other traffic flows better.


I'll go a step further and say that this is in a way more wasteful. For the previous commute, driving was at least necessary. Driving 2.5 miles is like having a gas-powered robot fetch the morning newspaper.


Ebikes are good options too


> If you’re concerned about your carbon footprint

If he were concerned about his carbon footprint, he wouldn't think producing 2 tons of CO2 a year through programming unimportant.


If you keep a customer with a recurring subscription for a year, Apple reduces its cut to 15%. This applies to all developers: https://www.apple.com/ios/app-store/principles-practices/
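To put rough numbers on that split, here's a hypothetical sketch of one subscriber's proceeds under those terms (30% commission in the subscriber's first year, 15% after). The `developer_proceeds` helper is my own name for illustration, not an Apple API:

```python
def developer_proceeds(monthly_price: float, months: int) -> float:
    # Cumulative proceeds from one subscriber: Apple keeps 30% during the
    # subscriber's first paid year, then 15% once they pass one year.
    total = 0.0
    for month in range(months):
        commission = 0.30 if month < 12 else 0.15
        total += monthly_price * (1 - commission)
    return total

# One $9.99/month subscriber retained for two years:
# year one pays out at 70%, year two at 85%.
print(round(developer_proceeds(9.99, 24), 2))
```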


But based on the articles I've read, it sounds like it's 15% for all new subscriptions too, not just recurring subscriptions over a year.

https://www.theverge.com/2020/7/30/21348108/apple-amazon-pri...

https://www.bloomberg.com/news/articles/2020-07-29/apple-con...


Yes, but Apple has stated that it's a special program for video subscriptions that other developers can join and have joined, even before Apple (e.g. Canal+). The terms are not public, but it looks like they're made available as an option to developers that have apps in that market, and the discount appears to be available if your app implements all the features of the Apple video ecosystem (AirPlay 2, a native Apple TV app, etc.).


Whenever end-to-end encryption is not used, scenarios like these are bound to happen eventually.

As far as I know, the only home surveillance products that use E2EE are ones that support HomeKit Secure Video [1].

1. https://support.apple.com/en-us/HT210538


These kinds of scenarios can happen with workers in government offices, archives, and medical institutions as well. And yet paper documents are not E2E encrypted.

Maybe... just maybe... technology is not really what should be the core issue here? But we should perhaps look at our policies and legislation? Adding proper liability there will make technology come by itself. The magic of free market doesn't seem to be working here.


Any problem is easy if you oversimplify it.

The cultural conceit of 'disruptors' is that society has made everything complicated and therefore society is 'ripe for disruption' which if you read between the lines means 'stupid'. Lack of respect means lack of care. Lack of care leads to injury (theirs, and/or ours).

You are right. It's not the tech. It's the arrogance.

From my knothole, legislation comes for things that aren't policing themselves adequately. I think what we are discovering is that there are a lot of domains where the old guard were self-policing to a degree, and the newcomers have absolutely no reverence for anything.

I expect it won't be long before you'll see industries taking a hard look at their internal culture, and then engaging in regulatory capture to keep out the disruptors.


The easiest way to keep someone out is to lock the door.

You can create penalties, punishments, hire security guards to watch the door. But the most efficient and effective way is just a lock.


That's absolutely not true. Most doors are trivial to pick and as easy to break down.

The main, and usually only, real reason for the lock on the door is to serve as a physical symbol which establishes a particular legal status of the property behind the doors, with associated consequences for unlawful entry. The legal apparatus - penalties, punishments - is what deters crime. Lock is an XML tag made of matter.

(The additional, secondary role of a lock is being a trivial inconvenience. Not enough to deter a thief determined to rob your place, but enough for a thief determined to rob a place to skip yours and pick a different one.)


Forget the analogies: having to explicitly misuse the system to violate customers' privacy creates a strong disincentive.

All access to customer data should require multiple people, not by policy but by mandatory access controls.

The fact that employees could hack their employer is true but not meaningful.

The number willing to commit felonies is less than the number willing to risk termination.


I feel like I asked this before and then didn't bookmark the answers.

What systems are out there for requiring consensus for access? I know about K of N protocols for hardware cryptography, but I'm fuzzy on such systems for, say, admin functionality or data retrieval. Are they all in-house at this point?

I've found over and over again in my work that it's much easier to spout rhetoric about process change when I have provided tools to facilitate those changes. Maybe it's time for us to collaborate on some tooling in this space.
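To make the K-of-N idea concrete, here's a toy sketch of Shamir secret sharing in Python: split a credential into N shares so that any K of them reconstruct it, while fewer than K reveal nothing. This is a demo, not reviewed crypto; the prime and parameters are purely illustrative:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, big enough for this demo

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def eval_at(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=42, k=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 42
```

The same idea generalizes from "reconstruct a key" to "approve an action": the K-of-N gate lives in the math, not in a policy document.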


I can't remember which company it was, but I once read an article about a company that implemented its own version of `sudo`. Their version required another developer to approve your session before granting root privileges, and then allowed them to watch everything you did.


I was addressing the lock analogy itself, but going back to the original topic, I believe this line of thinking still applies to an extent. Setting up hoops one has to jump through to do something nefarious is as much about the difficulty of jumping as it is about the very act of jumping. If you have to work around some security features to access customer data, you can't defend yourself by saying you've accessed it "somehow" or by accident.


Protecting property is not the only use-case for locks.

There are also locks on e.g. cell doors in prisons. Those are pretty essential to the function of the cell, and tend to survive anything prisoners might try to do to them.

There are also locks (specifically, interlocks) on e.g. dam spillways, or on the airlocks on submarines. (For these, the "key" is a button somewhere else that's not necessarily itself secured, but it is still very crucial that they keep things out when that button has not been pushed.) They hold up pretty well—even against malicious infiltrators—mostly because they fail closed and have no UI components mechanically linked to the locking mechanism.


I've never had a window or door broken, but if I left my door unlocked, everything would be stolen. I've had stuff stolen that was outside, even had people try the locked door while I was inside. It doesn't seem to matter that my house is actually pretty easy to break into, as long as I lock the doors. So I would agree that the easiest way to keep people out of my house is to lock the door.


I must disagree, although I find what you are saying an important part of the defenses, and likely a larger issue in certain parts of the country and certain neighborhoods.

I don't think the GP's point is "absolutely not true". I have a fair amount of hobbyist experience dealing with petty thieves and criminals over the past couple of decades, studying them locally and through polls and news articles: stories about locked-up thieves admitting they will generally skip houses that have big dogs and security systems, for example.

Certainly there are certain types of people to take into consideration from what you are mentioning, and petty criminals vary from locale to locale in significant ways sometimes. From what I understand places like frisco often have car hoppers busting out windows of cars on a regular basis, however in my area they generally only check for doors to be locked or unlocked when choosing to rummage through a car. A portion of the criminals around here will make an exception and bust a window if they see a purse or briefcase, but generally move on to the next without making too much noise, for example.

In most neighborhoods seeing someone crouched down playing with a door lock would attract attention and likely calls to the police. Kicking in a door would also create an amount of noise that brings attention the average criminal does not want to deal with.

Sure if a delivery person has seen you have a box of gold and sapphires next to the door (or notice your vintage guitar collection hanging on the walls while trick or treating) - they may target you with a door kick / other means of juice that is worth the squeeze..

but most of the thieves in my area will skip the locked houses and move to the next softer target. (often ringing the doorbell to see if anyone is home first)

I don't think most petty thieves are willing to learn lock-picking, even though it's easier to learn today than it was 20 years ago.. The added time it takes is not really worth it. (for most in most situations)

It's easier to find a neighbor that has a window air conditioner that can be pushed in with ease (at least around here, this technique in Minnesota may not be used as often)

The only place I can think of in regards to "establishes a particular legal status of the property behind the doors, with associated consequences for unlawful entry" would be Kennesaw, GA - every person who lives there has a gun - there, the legal status of kicking in a locked door and its associated consequences are proportionally different than in most apartments in NY.

Some of the street thugs know that robbing with tools (which can be labeled burglary tools) carries an extra charge, just as robbing with a loaded gun means a different sentence for the crime of stealing using the threat of force.

I do agree that certain situations / threats make "The additional, secondary role of a lock is being a trivial inconvenience. Not enough to deter a thief determined to rob your place, but enough for a thief determined to rob a place to skip yours and pick a different one." true - but that does not make the above statement absolutely not true.

I think you are both right.


You're right that I shouldn't have said "absolutely not true", but I stand by my general message. Regular locks are inconveniences for thieves, not deal breakers.

> In most neighborhoods seeing someone crouched down playing with a door lock would attract attention and likely calls to the police.

Not if that someone is wearing a hi-vis safety vest (perhaps with "Cory & Trevor Locksmith Company" or something similar written on it).

My point is that the effectiveness of locks primarily comes from laws and economics, not from their physical properties.


Follow-ups to your comment have responded to your analogy with discussions of lock-picks, firefighter exceptions, and safes vs doors.

The point I think you were trying to make is that it's mathematically possible to create a cryptographic lock that's inviolable. This is a different way of thinking.

I agree strongly - it's one thing to have a process or rule on what to do, and another to build a system that forces the processes and rules to be followed.

(I say inviolable, but I'm aware you can typically defeat a cryptographic lock by taking a crowbar to the physical locks watched by your Ring doorbell and subsequently using the crowbar on the person whose mind holds the key to said lock...but that's not the lock's fault.)


Actually, the best way to keep people out is to convince them it is not worth it to try to get in.


I agree, but I would use the safe analogy instead of door, because locks only keep honest people honest, just like rules.


Locked doors can kill if a fire breaks out in your house.


Typically, at least in the US, a locked door can be opened from the inside without a key for that reason.


So while that is true (and not to go too far off into the weeds on an analogy), in an emergency, people are trying to follow procedure under pressure and the odds of error in operation of an interface increase. You want the interface that is used in an emergency situation to either be well practiced or absolutely as intuitive as possible.

To destructure the analogy and give a concrete example, if I'm dying of allergic shock I don't want my doctor unable to access my medical history because somebody in the process can't remember how to "break glass" on the encryption on my medical records, even if there's a procedure to do so. I want my records in plain text format and as readable as possible.


Your concrete example would never happen in the real world. If you were in anaphylactic shock, no doctor is going to go off looking for your medical records first, even if they were sitting on the table next to him. He's just going to stick you with epinephrine and then MAYBE look at your medical records later.

All that said, I get your point, but I'm not sure how it applies to this discussion anyway.


My point is there's a tradeoff between prevention and penalty. Sometimes, the best option isn't a locked door; it's a clear sign saying "trespassers will be prosecuted" and a security camera (i.e. auditable access, not preventative access denial). That way, people can get in if they need to do something critical, and one can resolve the question of trespass later.


True, but:

- If you normally keep the door locked, unlocking it with the thumb turn is what you do every time anyway

- In cases where there are many people expected to use doors that they're unfamiliar with, it's typically even simpler to exit (panic bar on business fire exits, automatically-unlocking deadbolt on hotel doors, etc)


If you keep the door locked at all times, unlocking it should be pretty routine.


I'm a strong proponent of both approaches. If surveillance infrastructure is in place, and all you have protecting you is law, it only takes one small change, or one warrant, to lose all your privacy (and you won't even find out about it). On the other hand, if the law forbids privacy, technological solutions won't withstand for very long, especially when you can be compelled to hand over your passwords or face jail.


This brought back memories of doing data entry for an insurance company as a teenager. I spent eight hours a day transcribing people's names, addresses, SSNs, and medical ailments, including all sorts of sexually transmitted diseases.

It's weird, now that I think about it. I was just some kid they hired as a temp. We've never really known who's looking at our private data.


I agree, there should be a legislative intervention here. These devices come out semi-regularly with no regard to security.


I don’t know if Ubiquiti’s feeds are streamed encrypted, but at least the recording infra is 100% local and can be accessed locally without any cloud middleman if desired.


Best I could find[1], but I think the forum question is about having an NVR at another site with a VPN connection between the site and the camera.

[1] https://community.ui.com/questions/Are-Unifi-Video-streams-e...


The only product currently out that supports this is the Logitech Circle.

https://www.apple.com/ios/home/accessories/#section-camera


Unless I COMPLETELY misunderstand encryption, E2E encryption only protects your data in transit. It does not mean that data on servers is encrypted, NOR does it mean that servers don't have decryption keys to that data if it is encrypted.

Am I wrong about this?


Your confusion is around where the end is in this case. E2E would be encryption from the ring device to your other device being used to view the feed (your cellphone for instance). Part of the difficulty in that case is getting the encryption key securely transferred between the two devices without exposing it to anyone else (a non-trivial problem). Assuming that was done in this case Ring employees would only have access to the encrypted videos with no access to the decryption keys to actually view them.

E2E Encryption is usually referenced in messaging applications where the ends are understood to be the two communicating parties, while in this scenario it's a little more nebulous.
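As a rough illustration of how two "ends" can agree on a key even when a cloud service relays everything between them, here's a toy Diffie-Hellman exchange in Python. The group parameters are demo placeholders, not a vetted DH group; real systems use something like X25519 and also authenticate the exchange to stop man-in-the-middle attacks:

```python
import secrets

P = 2**521 - 1   # a Mersenne prime used purely as a demo modulus
G = 3

phone_secret = secrets.randbelow(P - 2) + 1
camera_secret = secrets.randbelow(P - 2) + 1

# Each side publishes only G^secret mod P; the cloud may relay these freely.
phone_public = pow(G, phone_secret, P)
camera_public = pow(G, camera_secret, P)

# Both sides derive the same shared key; the relay, seeing only the
# public values, cannot (that would require solving a discrete log).
phone_key = pow(camera_public, phone_secret, P)
camera_key = pow(phone_public, camera_secret, P)
assert phone_key == camera_key
```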


In short, yes, because end-to-end implies only a single producer and consumer have access to the data. Storage in the cloud wouldn't be an "end", and therefore it must be encrypted at that stage. The ends are 1) where the data is created by the device, and 2) wherever it is viewed on retrieval by the end user. While it's in the cloud it's still "in transit".

Facebook, if I recall correctly, at one point seemed to be trying to redefine the term to be "encrypted on its way to us and then back out again", which IMO is nothing short of propagandizing to confuse people, I assume to foil demand for real E2E encrypted products and gain unearned trust.
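The "still in transit while stored" idea can be sketched like this: the client encrypts before uploading with a key that never leaves its devices, so the cloud only ever holds ciphertext. The cipher below is a toy SHA-256 counter-mode keystream for demonstration only; a real product would use an authenticated cipher such as AES-GCM:

```python
import hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR the data against a SHA-256-derived keystream (toy counter mode)
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + offset.to_bytes(8, "big")).digest()
        out.extend(x ^ y for x, y in zip(data[offset:offset + 32], pad))
    return bytes(out)

device_key = secrets.token_bytes(32)        # never leaves the user's devices
nonce = secrets.token_bytes(16)
clip = b"front door, 08:12, motion event"

uploaded = keystream_xor(device_key, nonce, clip)          # what the cloud stores
assert keystream_xor(device_key, nonce, uploaded) == clip  # only the key holder can read it
```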


At least in Apple's case, they do not have the keys, because the data is encrypted by your devices before being uploaded. It can then only be read by your devices, because they hold the keys to decrypt it.


The latest Apple platform security doc (fall 2019, available as pdf) does a half-decent job of explaining their key distribution mechanisms (iCloud Keychain, they call it) too. They are doing some pretty complicated stuff under the hood to support multiple devices (trust circles, they call it).

I just wish I could read the source code to make sure theory and practice are reasonably congruent.


I am curious if there are others.

But as soon as a camera came out that supported this, I finally got one (I had flat-out refused to get one before, even though I wanted to).

It feels pretty good knowing it's stored encrypted in my iCloud and all of the processing happens on my devices (HomePod and Apple TV).


Wyze has end-to-end encryption for their cloud stuff, or you can save it all on an SD card instead. Wyze cams are also really cheap ($20), but they don't make a doorbell, so for now I'm keeping Ring (it came with my house) until I see a good alternative.


Please correct me if I am wrong, but Wyze doesn't actually encrypt the files they store; in their case, "end to end" just means the files are secure in transmission. HomeKit Secure Video actually encrypts the files so they can't be viewed by Apple.


You might be right; all I was able to find was this:

https://forums.wyzecam.com/t/meaning-of-end-to-end-encryptio...

On the other hand, I can always just revert to using an SD card instead of sending my data over to them. I'm okay with this so far though; they delete videos after 15 days anyway.


Wyze is using the term end-to-end wrong, which is very disappointing but not surprising. They are considering themselves an end, which changes the meaning in a way that makes the term totally meaningless. The ends in end-to-end are the end users.


How is Ring better than a Wyze cam pointed at your front door? Genuinely curious. I have several Wyze cams but have never interacted with a Ring much other than pushing the bell button at someone else’s house.


I prefer the Wyze cam overall, but the Ring is already installed, so I'm just leaving it there. I don't want to uninstall it until I have a better doorbell device, or unless there are serious security implications in keeping a Ring device.

I'm also waiting for the outdoor / weather resistant Wyze cams that are set to come out this year, so I can put them facing the front door from the front part of my porch. I hope to have it trigger a Wyze lightbulb, though I'm not sure if they're that smart or if they need the sensor instead.

Edit:

Matter of fact, I have a window next to the Ring where I have a Wyze cam looking outside, but the IR doesn't work through the window, so I'm just waiting for the outside Wyze cams to be a thing. Once I get those, I might repurpose some of the regular Wyze cams to do time-lapse videos of the weather from upstairs. Florida has interesting weather.


I also have a Wyze cam on the inside of a window looking out, and you're right that the IR would just reflect in the window, making it useless at night. But I think you can put an IR bulb (or LED array) outside, and just have that light up the area. I'm planning to get an LED array for mine when I have some free time. I also think that you probably don't really need a weather-resistant Wyze cam if it's mounted sufficiently close underneath eaves. Probably depends on how much wind you get in your area though.


The benefit of their upcoming outside cameras is that they will run on battery, and you plug them in only to recharge.

That's a good idea; I hadn't thought about adding an IR bulb outside. I'm thinking of getting Wyze bulbs for outside though; I'm sick of coming home to complete darkness. I just bought the home and there are no smart lights outside yet.

I do need to worry about weather mainly because I do live in Florida, and hurricanes do come through now and then. I will be sure to mount it securely though.


Why would end-to-end help when it's the other end that's watching?


The other end should be you too?

Unless you intend for someone else to oversee your surveillance operation, your footage shouldn't leave your premises unless encrypted, using keys which don't leave your possession. You enter them out-of-band on the device on which you wish to watch remotely.

Is there some implied benefit to not encrypting end-to-end or are they just being lazy and using nothing more than TLS because security isn't really the goal?


> The other end should be you too?

But that cannot work with a cloud-based Motion Detection feature (arguably the second most important feature of Ring doorbell cameras, after the doorbell functionality). The Motion Detection is done server side so the server has to be able to see unencrypted video. Maybe if there was a lot more powerful (and programmable) hardware on the camera side you could do it there.


Makes sense, I thought there would have to be some "good" reason.

You wouldn't need anything much more powerful than a Pi 4B to do that part for a couple of cams, but I guess this keeps the cost down for a security-unconscious public.


I don't see why you couldn't. The hardware to do it isn't expensive, so the camera itself could do that processing locally and just send the data along with the video encrypted to the end device. It might make the product cost a bit more, but it would also eliminate most of the concerns I have with that type of product.
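A local motion detector really can be very simple. Here's a hypothetical frame-differencing sketch in Python, with arbitrary demo thresholds, to show the kind of processing that could run on the camera instead of a server:

```python
def motion_detected(prev, curr, pixel_delta=25, changed_ratio=0.01):
    # Flag motion when enough pixels change by more than pixel_delta
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed / len(curr) > changed_ratio

static = [100] * 10_000          # a 100x100 grayscale frame, all mid-gray
moved = static[:]
moved[:500] = [200] * 500        # 5% of the pixels brightened

assert not motion_detected(static, static)
assert motion_detected(static, moved)
```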


In this context, "end to end" means being encrypted between the camera and the user's devices they use to watch the camera, with the cloud service acting as an intermediary between the two, and unable to decrypt the data.


Can't wait for (scalable) homomorphic encryption, where providers can serve you without ever knowing what's in your data.
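For a taste of what that means: textbook RSA (without padding) is already multiplicatively homomorphic, so a server can multiply ciphertexts without ever seeing the plaintexts. A toy Python demo with deliberately tiny, insecure parameters:

```python
p, q = 61, 53                    # demo primes, far too small for real use
n = p * q                        # 3233
e, d = 17, 2753                  # public/private exponents for this n

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

c1, c2 = enc(6), enc(7)
product = (c1 * c2) % n          # the server multiplies ciphertexts blindly
assert dec(product) == 42        # ...and the result decrypts to 6 * 7
```

Fully homomorphic schemes extend this to arbitrary additions and multiplications; scaling them is the hard part the parent comment is waiting on.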

