emgram769's comments | Hacker News

>allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence

if your system involves an unencrypted network and unencrypted data, it would be trivial to train an identical network

the idea of controlling an "intelligence" with a private key is silly. you can achieve effectively the same thing by simply encrypting the weights after training.

Can't someone simply recover the weights of the network by looking at the changes in the encrypted loss? I don't think comparisons like "less than" or "greater than" can possibly exist in HE, or else pretty much any information one might be curious about could be recovered.


great point. I don't think LT or GT exist in this homomorphic scheme. :) Otherwise it would be vulnerable; checks like this are part of what goes into a good HE scheme.
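
for intuition, here's a toy sketch of an additively homomorphic scheme (Paillier, with demo-sized primes; not necessarily the scheme in question): ciphertexts can be added together, but nothing in the scheme lets you order or compare them without the private key.

    import math, random

    # toy Paillier: Enc(a) * Enc(b) mod n^2 decrypts to a + b, but there is
    # no operation that reveals whether Enc(a) < Enc(b). demo-sized primes!
    p, q = 293, 433
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)          # math.lcm needs Python 3.9+

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

    def enc(m):
        while True:
            r = random.randrange(1, n)    # blinding factor must be coprime to n
            if math.gcd(r, n) == 1:
                return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = enc(20), enc(22)
    assert dec((a * b) % n2) == 42        # addition happens under encryption
    # encryption is randomized, so enc(20) != enc(20) almost surely:
    # ciphertexts leak no ordering, which is exactly why LT/GT can't exist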


here's a pic of the expected output: https://u.teknik.io/QWvJx.png


>This doesn’t have any effect on the phone, however. I have to tap back, return to the login screen, and enter my password again

I believe you can simply hit the login button (sans code) after validating the login attempt instead of closing the application.


check out how Bluetooth LE works; I think you'd be surprised at how little power it can consume in a passive state



a computer increases performance 10-fold: what took 10 minutes now takes 1. we notice a difference of 9 whole minutes! wow!

a new computer increases performance 10-fold: whoaaaa, that minute of waiting is now merely SECONDS! we saved almost a whole minute of waiting!

an even newer computer increases performance 10-fold: nice, that's noticeably faster.

today, the newest computer is available: meh, something from 10 years ago would run this fine...

we are getting to a point where other aspects of performance are also getting harder and harder to notice: things like rendering, supported display sizes and refresh rates, and the ability of machines to understand us (voice recognition and computer vision).


someone made a Mach-O lib for Ruby: https://github.com/woodruffw/ruby-macho


We actually got this error as well and had to update to a newer version of OpenCV; 2.4.10 seemed to fix it.


I helped create it at a recent hackathon. the ASCII is rendered by calculating the intensity of each pixel, mapping that intensity to a character based on how much of the cell that character fills ('@' would be bright and '.' would be dark), and then approximating the color of the pixel to the nearest of the 256 available colors.
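
roughly, as a sketch (the ramp string and the xterm 256-color cube math here are my own approximations, not necessarily what p2pvc does exactly):

    RAMP = ".,:;-=+*#%@"   # dark -> bright, ordered by how much of the cell each glyph fills

    def glyph(r, g, b):
        intensity = int(0.299 * r + 0.587 * g + 0.114 * b)   # 0..255 luma
        return RAMP[intensity * (len(RAMP) - 1) // 255]

    def xterm256(r, g, b):
        # snap each channel to the 6-level color cube (palette indices 16..231)
        q = lambda v: (v * 5 + 127) // 255
        return 16 + 36 * q(r) + 6 * q(g) + q(b)

    def cell(r, g, b):
        return f"\x1b[38;5;{xterm256(r, g, b)}m{glyph(r, g, b)}"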

I've noticed your implementation uses websockets, which are built on top of TCP. we used UDP and encapsulated everything in a pretty simple to use (and easily the most well-documented part of the project) p2plib.c. the fear was that video and audio could get backed up if we used TCP, so audio and video were sent via UDP packets and rendered every time a UDP packet was received.
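
the receive side is basically a "render whatever arrives" loop; a minimal sketch of that pattern (not p2plib.c's actual interface):

    import socket

    def render(frame: bytes) -> None:
        # stand-in for the ASCII drawing code
        print(f"got frame chunk: {len(frame)} bytes")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 55555))            # port chosen arbitrarily for the demo

    while True:
        frame, addr = sock.recvfrom(65535)   # one datagram = one frame/chunk
        render(frame)                        # draw immediately; lost packets are just skipped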


You can go a bit further and get significantly higher color depth by using the CP437 block characters 176, 177, and 178. By taking the foreground and background ANSI colors and blending them with these three characters (25%, 50%, and 75% coverage), you can achieve some quite realistic images. If you're more interested in resolution, half-block characters like 220 and 223 let you double your vertical resolution, at the cost of not being able to use the color trick.
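
A sketch of that blend search (the coverage fractions and the 8-color palette RGB values below are approximations; actual values vary by terminal):

    # CP437 176/177/178 are the shade glyphs; coverage fractions are approximate
    SHADES = [("\u2591", 0.25), ("\u2592", 0.50), ("\u2593", 0.75)]

    PALETTE = {  # SGR foreground code -> rough RGB (varies by terminal)
        30: (0, 0, 0), 31: (205, 0, 0), 32: (0, 205, 0), 33: (205, 205, 0),
        34: (0, 0, 238), 35: (205, 0, 205), 36: (0, 205, 205), 37: (229, 229, 229),
    }

    def best_cell(target):
        """Return the escape sequence whose fg/bg/shade mix is closest to target RGB."""
        def err(fg, bg, a):
            mix = [f * a + b * (1 - a) for f, b in zip(fg, bg)]
            return sum((m - t) ** 2 for m, t in zip(mix, target))
        _, ch, fc, bc = min(
            (err(fg, bg, a), ch, fc, bc)
            for ch, a in SHADES
            for fc, fg in PALETTE.items()
            for bc, bg in PALETTE.items()
        )
        return f"\x1b[{fc};{bc + 10}m{ch}"   # bg SGR codes are the fg codes + 10

The half-block trick is simpler still: print character 223 (the upper half block) with the foreground set to the top pixel's color and the background set to the bottom pixel's.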


I made something similar to this using a combination of 256 color mode and unicode halftone characters for "dithering" between the foreground/background colors.

https://github.com/dhotson/txtcam

Another thing I'd like to try for a bit more resolution is using the braille character set in combination with 256 color mode (see the sketch below): https://github.com/asciimoo/drawille

Also, some terminals are starting to support 24-bit color (iTerm2 nightlies), which could pretty drastically improve the possibilities of terminal-based video: https://github.com/frytaz/txtcam/tree/color :)
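
For reference, the braille packing drawille relies on is just a bitmask offset from U+2800, one bit per dot in a 2x4 grid:

    # each braille cell is a 2x4 dot grid; dot (x, y) maps to one bit off U+2800
    DOT_BIT = {(0, 0): 0x01, (0, 1): 0x02, (0, 2): 0x04, (1, 0): 0x08,
               (1, 1): 0x10, (1, 2): 0x20, (0, 3): 0x40, (1, 3): 0x80}

    def braille_cell(block):
        """block[y][x] truthy = dot on; block is 4 rows x 2 cols."""
        code = 0x2800
        for y in range(4):
            for x in range(2):
                if block[y][x]:
                    code |= DOT_BIT[(x, y)]
        return chr(code)

    print(braille_cell([[1, 0], [1, 1], [0, 1], [1, 1]]))   # one 2x4 tile -> one character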


I've tried doing this before but wasn't fully satisfied with my color lookup (very basic; really, even a hash table would do, the way I made it...) https://github.com/minikomi/ansipix

Do you have any good examples of calculating the right foreground/background colors to use?


the blending sounds like a good idea. even without the blocks there's headroom: p2pvc currently makes no use of the background color


Also have a look at:

http://aa-project.sourceforge.net/aalib/

http://caca.zoy.org/wiki/libcaca

and possibly other clones too. Might help improve the visuals.


well, it is 100x40 in characters, and each of those 4000 cells holds one of 10 characters in one of 256 colors. that's 4000 x (8 bits for the color (256 colors fit in 8 bits) + 4 bits for the character (10 characters fit in 4 bits)) = 48 Kb per frame, at about 20 fps. so maybe 120 KB/s?
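
spelled out, since Kb (kilobits) vs KB (kilobytes) is easy to trip over:

    cells = 100 * 40                         # characters per frame
    bits_per_cell = 8 + 4                    # 8 bits pick 1 of 256 colors, 4 bits pick 1 of 10 glyphs
    bits_per_frame = cells * bits_per_cell   # 48,000 bits = 48 Kb
    fps = 20
    print(bits_per_frame * fps / 8 / 1000)   # -> 120.0 KB/s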

