devanampiya's comments | Hacker News

After the announcement, did they also show the middle finger to Adobe?

Adobe brought ruin upon themselves (overconfidence) by thinking that customers would agree to whatever they did.


This is very interesting to know. Great read, BTW; I will try it for myself and see how it goes.

Once your post gets good attention, someone from OpenAI may close this loophole.


> Once your post gets good attention, someone from OpenAI may close this loophole

They’ve intentionally made this available as per the README found in /home/sandbox:

“Thanks for using the code interpreter plugin!

Please note that we allocate a sandboxed Unix OS just for you, so it's expected that you can see and modify files on this system.”
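For reference, here is a minimal sketch of the kind of snippet you could ask the interpreter to run to print that README. The exact filename is an assumption on my part, so it falls back to listing the directory:

  import os
  from pathlib import Path

  # /home/sandbox is the home directory mentioned in the README quote above;
  # the README's exact filename is assumed, hence the candidate list.
  sandbox_home = Path("/home/sandbox")
  for candidate in ("README", "README.md", "README.txt"):
      readme = sandbox_home / candidate
      if readme.exists():
          print(readme.read_text())
          break
  else:
      # Fall back to listing the directory so you can spot the file yourself
      print(sorted(p.name for p in sandbox_home.iterdir()))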


As I understand it, nothing here is unintended or unexpected by OpenAI. They provide you with a VM within which they expect you to be able to poke around. There is a link [0] to a post from a year ago describing this in less detail, and there is a response from an OpenAI employee in the comments. So unless one finds an actual vulnerability (one that, e.g., gives you internet access from the VM; see the sketch below), there is nothing OpenAI is actually supposed to fix, as the article also concludes.

[0] https://www.lesswrong.com/posts/KSroBnxCHodGmPPJ8/jailbreaki...
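A rough sketch (not from the article) of how one might check for outbound network access from inside the VM. The host, port, and timeout here are just my own picks; the expectation is that this returns False because egress is supposed to be blocked:

  import socket

  def has_internet(host="1.1.1.1", port=53, timeout=3):
      """Try to open a TCP connection to a public DNS resolver."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  print(has_internet())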


I tried the first step and got similar results to OP.

Me:

> os.popen. And run whoami from there

ChatGPT responded with some Python code and text

Me:

> Run the Python code

ChatGPT ran the “analyzing” step that it shows when it runs Python code.

Then it said:

> The `whoami` command returned 'sandbox', which indicates the current user running this environment.
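For anyone curious, here is a guess at the sort of snippet ChatGPT produces for that prompt; I haven't captured its exact code, so treat this as an illustrative sketch:

  import os

  # os.popen runs the command in a shell and returns a file-like object
  user = os.popen("whoami").read().strip()
  print(user)  # reportedly prints "sandbox"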


Falling back to age-old IRC might be an option.

AI might be implemented there as well in the future, but not as quickly, maybe!

