Hacker News | airza's comments

People give Tether their USD so they can bypass currency controls when buying and selling crypto online. There are a lot of them, to put it mildly.


Can people holding tether for a day or two while entering or exiting crypto really explain these large amounts?


I don't think it has dawned on the HN crowd how insatiable the demand for dollars across the world is. This stuff is being used in all corners of the world, and accelerating.


There are plenty of people in jurisdictions where it is difficult to get dollars that use them for other things. Normal transactions, gambling, saving in USD. Not uncommon.


people don't exit

it's just like your brokerage account: imagine if Schwab issued a stablecoin for every deposit someone made and only deleted some of that stablecoin when someone redeems

you'll find that people deposit and trade, and they keep their balances there their entire life and beyond. when they trade, they are selling the stablecoin to someone else; that someone else could redeem, but they aren't doing so either. stablecoins are liquid and useful, and they have passive-income capabilities while holding their principal value

so for Schwab's reporting, the balances always increase as people deposit more whenever they get their paycheck

this is what you're seeing with Tether and all the other leading stablecoins; they grow at the same pace because they are capturing the same market

when actual traditional finance brokerage firms start issuing stablecoins, you'll see the same thing, the stablecoin just offers transparent real time behavior into their customer deposits

the only time stablecoin balances go down, and treasuries are subsequently offloaded behind the scenes, is when someone redeems a stablecoin for fiat currency. that is rarely necessary: people either don't want fiat or don't need to get it by redeeming
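The mint-on-deposit, burn-on-redemption accounting described above can be sketched as a toy model (all names and numbers here are illustrative, not Tether's actual books):

```python
# Toy stablecoin issuer: tokens are minted 1:1 on fiat deposits and
# burned only on redemptions. Secondary-market trades between holders
# move tokens around but never change the total supply.

class StablecoinIssuer:
    def __init__(self):
        self.supply = 0.0  # total tokens outstanding

    def deposit(self, usd):
        """Mint tokens against fiat deposited with the issuer."""
        self.supply += usd

    def redeem(self, usd):
        """Burn tokens when a holder redeems them for fiat."""
        self.supply -= usd

issuer = StablecoinIssuer()
issuer.deposit(1_000)   # A deposits $1,000
issuer.deposit(500)     # B deposits $500
# A sells tokens to C on an exchange: no mint or burn occurs
issuer.redeem(200)      # C redeems $200 for fiat

print(issuer.supply)    # 1300.0
```

The point of the sketch: supply only shrinks on the last step, so as long as most holders keep balances or trade them onward, reported supply ratchets upward.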


I think the market for those stablecoins is now broader than criminals and traders.

There is a huge demand for dollars by individuals outside of the US. Countries where people do not really trust their banks and currencies.

Even in the Euro zone, most savings accounts will give you about 1% after taxes. So why not go with USDT, USDC or EURC and get about 5% on a relatively safe lending platform?
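For a rough sense of the gap between those two illustrative rates, here is the compounding over ten years (ignoring platform risk, which is the real trade-off):

```python
# Compare the comment's illustrative rates: ~1% after-tax on a euro
# savings account vs ~5% on a stablecoin lending platform.

def grow(principal, rate, years):
    """Value of `principal` after compounding at `rate` for `years`."""
    return principal * (1 + rate) ** years

print(round(grow(10_000, 0.01, 10), 2))  # 11046.22
print(round(grow(10_000, 0.05, 10), 2))  # 16288.95
```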


You get 5%? If so, that is surely a big piece of the puzzle. Where does that yield come from? Bonds don't yield that, right?


There are many ways; the most common is to lend your stablecoins against crypto collateral. So if you put 1000 USD in the lending pool, you have, let's say, a guarantee that 2500 USD worth of bitcoin is serving as collateral. That will get you about 5% on a serious protocol.

You can also provide liquidity on a stable stablecoin/stablecoin pair on a reputable decentralized exchange and get some of the fees.

There are surely many other "safe" ways.
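As a sketch of the collateralization math in the lending example above (the 150% minimum ratio is a made-up threshold for illustration, not any particular protocol's parameter):

```python
# Overcollateralized lending: a loan of stablecoins is backed by
# crypto worth more than the debt; if the collateral's value falls
# below a minimum ratio, the position can be liquidated so lenders
# are repaid before the loan goes underwater.

def collateral_ratio(collateral_usd, debt_usd):
    return collateral_usd / debt_usd

def is_liquidatable(collateral_usd, debt_usd, min_ratio=1.5):
    return collateral_ratio(collateral_usd, debt_usd) < min_ratio

lent = 1_000            # stablecoins supplied to the pool
btc_collateral = 2_500  # USD value of BTC backing the loan

print(collateral_ratio(btc_collateral, lent))        # 2.5, i.e. 250%
print(is_liquidatable(btc_collateral, lent))         # False
# After a 50% BTC drawdown the position crosses the threshold:
print(is_liquidatable(btc_collateral * 0.5, lent))   # True: 1.25 < 1.5
```

The lender's yield is the interest borrowers pay for that leverage; the overcollateralization plus liquidation is what makes it "relatively" safe rather than risk-free.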

If you live in Switzerland you're probably not gonna bother, but it is more transparent and safer than what a lot of people have access to around the world.

I know the DAI stablecoin was already very popular as a savings account in Argentina around 2018-2019.


Why for a day or two?


I thought he meant basically: buy tether with a dollar transfer, exchange for BTC. That shouldn't take long.


yes


I too have been insanely burned by an MPS bug. I wish Apple would throw an engineer or two at making sure their hardware works with PyTorch.


I guess I am the odd one out here. Reading it front-to-back has been a blast so far, and even though I find my own site's design to be a bit more readable for long text, I certainly appreciate the strangeness of this one.


It’s nice to know that someone else suffered this pain. And that I bet on PGMs, which really turned out to be the wrong horse…


ha! I took at least one PGM class myself. I had a difficult time with the material.


I get they want to have a lot of their own swift-based bindings but I wish they could also keep their MPS pytorch bindings up to date...


You could learn the answer to this question in the second sentence of the article.


I could tell from the wording of the headline


Did you use the tool to design the website for the tool? There is a difference in the way the design looks and the customer showcase looks.


Nope — the site was hand-built. The showcase pieces were 100% AI-generated, which is why there’s a quality gap. Getting the tool to design as well as a human with taste is the next challenge.


well, that is the minimum threshold you need to cross to have a product, otherwise you just wrapped chatgpt and that is a 2023 thing.


There's not really another game in town if you want to do fast ML development :/


Dunno, almost all of the people I know anywhere in the ML space are on the C and Rust end of the spectrum.

Lack of types, lack of static analysis, lack of... well, everything Python doesn't provide (and fights users on) costs too much developer time. It is a net negative to continue pouring time and money into anything Python-based.

The sole exclusion I've seen to my social circle is those working at companies that don't directly do ML, but provide drivers/hardware/supporting software to ML people in academia, and have to try to fix their cursed shit for them.

Also, fwiw, there is no reason why Triton is Python. I dislike Triton for a lot of reasons, but it's just a matmul kernel DSL; there is nothing inherent in it that has to be, or benefits from, being Python. It takes the DSL in, outputs shader text, then has the vendor's API run it (i.e., CUDA, ROCm, etc.). It, too, would benefit from becoming Rust.


I love Rust and C, I write quite a bit of both. I am an ML engineer by trade.

To say most ML people are using Rust and C couldn’t be further from the truth


They said most people they knew, not most people.


It was obviously implied


> It, too, would benefit from becoming Rust.

Yet it was created for Python. Someone took that effort and did it. No one took that effort in Rust. End of the story of crab's superiority.

The Python community is constantly creating new, great, highly usable packages that become de facto industry standards, and it maintains old ones for years, creating tutorials, trainings and docs. Commercial vendors ship Python APIs to their proprietary solutions. Whereas the Rust community goes around forums and social media telling them that they should use Rust instead, or that they "cheated" because those libraries are really C/C++ libraries (and BTW those should be done in Rust as well, because safety).


> Dunno, almost all of the people I know anywhere in the ML space are on the C and Rust end of the spectrum.

I wish this were broadly true.

But there's too much legacy Python sunk cost for most people. Just so much inertia behind Python for people to abandon it and try to rebuild an extensive history of ML tooling.

I think ML will fade away from Python eventually but right now it's still everywhere.


A lot of what I see in ML is all focused around Triton, which is why I mentioned it.

If someone wrote a Triton impl that is all Rust instead, that would do a _lot_ of the heavy lifting on switching... most of their hard code is in Triton DSL, not in Python, the Python is all boring code that calls Triton funcs. That changes the argument on cost for a lot of people, but sadly not all.


Okay. Humor me. I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?


You do know that tensorflow was written in C++ and the Python API bolted on top?


It could be written in a mix of COBOL and APL. No one cares.

People saying "oh, those Python libraries are just C/C++ libraries with a Python API, every language can have them" have one problem - no other language has them (with such extensive documentation, tutorials, etc.)


Tensorflow has extensive documentation of its C++ interface, as that is the primary interface for the library (the Python API is a wrapper on top).


I hoped it was quite obvious that by "other languages" I meant "other than Python and C/C++ in which they are written".

At least sibling actually mentioned Java.


Scroll up this thread and the other poster was asking if you can use pytorch and tensorflow from C. Both are C++ libraries, so accessing them from C/C++ is pretty trivial and has first-class support.


You should read more carefully before responding.

I said "beside Python, and C/C++ in which they are written"

You: "you can see people are using it from C".

What a surprise that a library usable from Python through a wrapped C API has a C API!


PyTorch and Tensorflow also support C++ (naturally) and Java.


I am. Are you suggesting that, as an alternative to the Python bindings, I should use C to invoke the C++ ABI for TensorFlow?


> Okay. Humor me. I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?

Use C++ bindings in libtorch or tensorflow. If you actually mean C, and not C++, then you would need a shim wrapper. C++ -> C is pretty easy to do.


PyTorch also supports C++ and Java, Tensorflow also does C++ and Java, Apple AI is exposing ML libraries via Swift, Microsoft is exposing their AI stuff via .NET and Java as well, then there is Julia and Mojo is coming along.

It is happening.


TensorFlow is a C++ library with a python wrapping, yet nobody (obviously exaggeration) actually uses tensorflow (or torch) in C++ for ML R&D.

It's like people just don't get it. The ML ecosystem in python didn't just spring from the ether. People wanted to interface in python badly, that's why you have all these libraries with substantial code in another language yet development didn't just shift to that language.

If python was fast enough, most would be fine to ditch the C++ backends and have everything in python, but the reverse isn't true. The C++ interface exists, and no-one is using it.


The existing C++ API is done according to those "beautiful" Google guidelines, so it could be much better.

However people are definitely using it, as Android doesn't do Python, neither does ChromeOS.


>However people are definitely using it, as Android doesn't do Python, neither does ChromeOS.

That's not really a reason to think people are using it for that when things like onnxruntime and executorch exist. In fact, they are very likely not using it for that, if only because the torch runtime is too heavy for distribution on the edge anyway (plus Android can run Python).

Regardless, that's just inference of existing models (which yes I'm sure happens in other languages), not research and/or development of new models (what /u/airza was concerned about), which is probably 99% in python.


Well, onnxruntime also has polyglot bindings, and is yet another way to avoid Python.

Yes, you can package Python alongside your APK, if you feel like having fun compiling it with the NDK and running stuff even more slowly on phone ARM chipsets over Dalvik JNI than it already runs on desktops.


It's all fun and games until you hit an astral plane character in utf-16 and one of the library designers didn't realize not all characters are 2 bytes.


Which is why I've seen lots of people recommend testing your software with emojis, particularly recently-added emojis (many of the earlier emojis were in the basic multilingual plane, but a lot of newer emojis are outside the BMP, i.e. the "astral" planes). It's particularly fun to use the (U+1F4A9) emoji for such testing, because of what it implies about the libraries that can't handle it correctly.

EDIT: Heh. The U+1F4A9 emoji that I included in my comment was stripped out. For those who don't recognize that codepoint by hand (can't "see" the Matrix just from its code yet?), that emoji's official name is U+1F4A9 PILE OF POO.
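To make the surrogate-pair issue concrete, here is a small Python check. Python strings count code points, so the mismatch only becomes visible when you encode to UTF-16, where U+1F4A9 (outside the BMP) takes two 16-bit code units:

```python
# U+1F4A9 PILE OF POO lies outside the Basic Multilingual Plane, so
# UTF-16 encodes it as a surrogate pair: two 16-bit code units.
# Code that assumes "one character == one UTF-16 unit" breaks here.

poo = "\U0001F4A9"

print(len(poo))                            # 1 code point
utf16_units = len(poo.encode("utf-16-le")) // 2
print(utf16_units)                         # 2 UTF-16 code units

# The two surrogate code units, as little-endian byte pairs:
units = [poo.encode("utf-16-le")[i:i + 2] for i in (0, 2)]
print([u.hex() for u in units])            # ['3dd8', 'a9dc']
```

Those bytes are the high surrogate U+D83D and low surrogate U+DCA9; a library that indexes or slices by 16-bit units can split the pair and corrupt the text.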


For more fun you can use flag characters.


I think that’s a very cool philosophy but unfortunately it makes your website unusable for me on mobile. It feels like ants are crawling all over my screen.

