Can someone please ELI5 what this means for Deno and Python? TFA: "deno is being distributed on pypi for use in python projects" makes it sound like you can now `import deno` and have a JS engine/subsystem in Python, like we finally came full circle from [PyScript](https://pyscript.net/).
However, other comments make it sound like a bunch of other projects have discovered that PyPI is a good distribution channel. Which, to me, sounds like using the Internet Archive as your CDN. Is PyPI the next apt/yum/brew or what?
You can, in fact, `import deno` after installing it. But all this gets you is a function that locates the Deno executable, which you can then invoke e.g. with `subprocess.call`.
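Something along these lines, if memory serves (treat `find_deno_bin` as a placeholder; I don't remember the exact name the package exposes for the locator):

    import subprocess
    import deno  # the PyPI distribution, not a JS engine binding

    # Hypothetical locator name; the package exposes *some* function like this
    # that returns the path to the bundled Deno executable.
    deno_bin = deno.find_deno_bin()

    # From here it's plain subprocess plumbing.
    subprocess.call([deno_bin, "run", "hello.ts"])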
(I hope this doesn't become a pattern that puts excessive pressure on PyPI. IMO it should only be used for things that are specifically known to be useful in the Python ecosystem, as a last resort when proper Python API bindings would be infeasible or the developer resources aren't there for it. And everyone should keep in mind that PyPI is just one index, operating a standard protocol that others can implement. Large companies should especially be interested in hosting their own Python package index for supply-chain security reasons. Incidentally, there's even an officially blessed mirroring tool, https://pypi.org/project/bandersnatch/ .)
For those less in the know: is it for convenience? Because most systems have a package manager that can install Python, correct? But `pip` is more familiar to some?
I think it’s more for Python libraries that depend on JavaScript.
Lots of packages rely on other languages and runtimes. For example, tabula-py[1] depends on Java.
So if my-package requires a JS runtime, it can add this deno package as its own dependency.
The benefit is consumers only need to specify my-package as a dependency, and the Deno runtime will be fetched for free as a transitive dependency. This avoids every consumer needing to manage their own JavaScript runtime/environment.
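Concretely (names made up; the same works with pyproject.toml metadata), my-package would just declare it like any other dependency:

    # setup.py for the hypothetical "my-package"
    from setuptools import setup

    setup(
        name="my-package",
        version="0.1.0",
        install_requires=[
            "deno",  # the Deno runtime comes along as a normal transitive dependency
        ],
    )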
The Zig one allows you to build native modules for your Python project from setup.py without having to have a C/C++ toolchain preinstalled. Here's a talk about this:
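The gist, as far as I understand it, is to point the build at the compiler that ships inside the `ziglang` wheel, roughly like this (a sketch, not the exact recipe from the talk):

    # Use zig's bundled clang as the C compiler so no system toolchain is needed.
    import os
    import sys

    # setuptools/distutils on Unix picks the compiler up from the CC environment
    # variable, so this is enough for a plain extension build.
    os.environ["CC"] = f"{sys.executable} -m ziglang cc"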
It's because when you use the native Python packaging tools, you can install a Python "wheel" into an arbitrary Python environment.
If you get Deno from the system package manager, or from deno.com directly, you're more constrained. Granted, it seems you can set an environment variable to control where the installer from the Deno homepage puts things, but then you still need to make your Python program aware of that path.
Whereas a native Python package can (and does, in this case, and also e.g. in the case of `uv`) provide a shim that can be imported from Python and which tells your program what the path is. So even though the runtime doesn't itself have a Python API, it can be used more readily from a Python program that depends on it.
PyPI (via pip) is about the only OS-agnostic package manager that's already available pretty much anywhere Python is installed.
Also, it's VERY convenient for companies already using Python as the primary language, because they can manage the dependency with uv rather than introduce a second package manager for devs. (For example, if you run Deno code but don't maintain any JS yourself.)
I'm no expert when it comes to software packaging and distribution issues, but this does give off Internet-Archive-as-CDN levels of Hyrum's Law to me. What could possibly go wrong hmmmmmm....
yt-dlp was also the first application that came to my mind. I've got my fingers crossed for this integration. It was interesting to learn how to hijack my own cookies, but rather uncomfortable nonetheless, to say the least.
I'd been driving for about a year (with my first car, too) when I drove a bunch of friends to an out-of-town amusement park. It's some kind of car-warming thing for me. It's about an hour-long drive without traffic.
In the park, I made it a hard point not to ride the bumper cars because I thought it would mess with my muscle memory as the designated driver. If not for that, I really do love bumper cars. However, I've found that the responsiveness of bumper cars varies a lot from park to park; it depends either on the maintenance or on the maker of the rides. And IME, none of them are really comparable to even the shittiest cars I've driven (e.g., the ones from the driving school, the assigned car for my license test).
But my bigger concern that day was the fact that the bumper-car mindset is not the road-car driver mindset. For learners, the free-for-all chaotic nature of the track is not even a good simulation! Not even if you're driving somewhere like India or China.
Speaking of simulation, I really want an affordable but legit way to practice dealing with outlier driving scenarios. Like, what if my brakes fail on the highway, or what if I get a flat while doing 100 km/h: stuff even the safest, most defensive drivers can't entirely rule out. Anyone know of games that might fit the bill?
> the main thing you need is free time and obsession (and money for your free time btw).
Free time (and money for your free time) is a privilege not everyone may have had. Also access to computers, which, don't forget, has only become ubiquitous this century, and sadly not always in a form that encourages experimentation. Without getting too much into the nature-vs-nurture debate, talent and obsession sadly won't go anywhere without the proper environment to cultivate them. You don't become Bellard/Knuth/Dijkstra with just a bunch of rocks[1] and a whole host of other concerns on top.
That doesn't address OP's point: some people's brains just work differently, and they can achieve something in 1000x less time than others. You can have all the time in the world and you'll never reach their level. That's essentially what talent is.
I have been thinking about what talent means in programming, and a case from the past comes to mind. The task was to parse a text file format. One programmer used ~1000 lines of code (LOC) with complex logic. The other used <200 LOC with a straightforward solution that ran several times faster and would probably be more extensible and easier to maintain in the future. This is a small task. The difference gets amplified enormously for the complex projects Fabrice is famous for. The first programmer in my story may be able to write a JavaScript runtime if he has time + obsession, but it will take him much longer and the quality will be much lower in comparison to quickjs or mqjs.
> If that's an issue (visibility into middle layers) it just means your events aren't wide enough.
I hate this kind of No-True-Scotsman handwave about how a certain approach is supposed to solve my problems. "If brute-force search is not solving all your problems, it just means your EC2 servers are not beefy enough."
I gotta admit, I don't quite "get" TFA's point, and the one issue that jumped out at me while reading it and your comment is that sooner rather than later your wide events just become fat, supposedly-still-human-readable JSON dumps.
I think a machine-parseable log-line format is still better than wide events, with each line hopefully correlated with a request id, though in practice I find that user id + time correlation isn't that bad either.
>> [TFA] Wide Event: A single, context-rich log event emitted per request per service. Instead of 13 log lines for one request, you emit 1 line with 50+ fields containing everything you might need to debug.
I am not convinced this is supposed to help the poor soul who has to debug an incident at 2AM. Take for example a function that has to watch out for a special kind of user (`isUserFoo`) where "special kind" is defined as a metric on five user attributes. I.e.,
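(a sketch; the attribute names, weights, and threshold are all made up)

    import logging
    log = logging.getLogger(__name__)

    FOO_THRESHOLD = 0.8

    def isUserFoo(u):
        # "special kind" of user, defined as a metric over five attributes
        score = 0.2 * u.a + 0.3 * u.b + 0.1 * u.c + 0.25 * u.d + 0.15 * u.e
        return score > FOO_THRESHOLD

    def handle_request(req, u):
        if isUserFoo(u):
            # a plain old log line that spells out why this user is special
            log.info("user %s is foo, taking the foo-specific path", u.id)
        ...  # rest of the handler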
Which immediately tells me that foo-ness is something I might want to pay attention to in this context.
With wide events, as I understand it, you either log the user in the wide event dump with attributes A to E (and potentially more!) or coalesce these into a boolean field `isUserFoo`. Neither of which tells me that foo-ness might be relevant in this context.
Multiply that by all the possible special cases any logging unit might have to deal with. There's bar-ness, which is also dependent on attributes A-E but with different logical connectives. There's baz-ness, which is `isUserFoo(u) XOR (217828 < u.zCount < 3141592)`. The wide event is soooo context-rich I'm drowning.
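For a feel of what that dump looks like in practice (every field name here is invented):

    wide_event = {
        "request_id": "r-4711",
        "route": "/checkout",
        "duration_ms": 182,
        "user.a": 0.4, "user.b": 0.9, "user.c": 0.1, "user.d": 0.7, "user.e": 0.2,
        "isUserFoo": True,
        "isUserBar": False,
        "isUserBaz": True,
        # ...plus dozens more fields, none of which says *why* foo-ness mattered here
    }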
Your objection, as I understand it, is some combination of "no true Scotsman" combined with complaints about wide events themselves.
To the first point (no true Scotsman), I really don't think that applies in the slightest. The post I'm replying to said (paraphrasing) that middle-layer observability is hard with wide events and easy with logs. My counter is that the objection has nothing to do with wide events vs logs, since in both scenarios you can choose to include or omit more information with the same syntactic (and similar runtime overhead) ease. I think that's qualitatively different from other NTS arguments like TDD, in that their complaint is "I don't have enough information if I don't send it somewhere" and my objection is just "have you tried sending it somewhere?" There isn't an infinite stream of counter-arguments about holding the tool wrong; there's the very dumbest suggestion a rubber duck might possibly provide about their particular complaint, which is fully and easily compatible with wide events in every incarnation I've seen.
To the second point (wide events aren't especially useful and/or suck), I think a proper argument for them is a bit more nuanced (and I agree that they aren't a panacea and aren't without their drawbacks). I'll devote the rest of my (hopefully brief) comment to this idea.
1. Your counter-example falls prey to the same flaw as the post I responded to. If you want information then just send that information somewhere. Wide events don't stop you from gathering data you care about. If you need a requestID then it likely exists in the event already. If you need a message then _hopefully_ that's reasonably encoded in your choice of sub-field, and if it's not then you're free to tack on that sort of metadata as well.
2. Your next objection is about the wide event being so context-rich as to be a problem in its own right. I'm sympathetic to that issue, but normal logging doesn't avoid it. It takes exactly one production issue where you can't tie together related events (or else can tie them together but only via hacks which sometimes merge unrelated events with similar strings) for you to realize that completely disjoint log lines aren't exactly a safe fallback. If you have so much context-dependent complexity that a wide event is hard to interpret then linear logs are going to be a pain in the ass as well.
Mildly addressing the _actual_ pros and cons: Logs and wide events are both capable of transmitting the same information. One reasonable frame of reference is viewing wide events as "pre-joined", with a side helping of compiler enforcement of the structure.
It's trivially possible to produce two log lines in unrelated parts of a code base which no possible parser can disambiguate. That's not (usually) possible when you have some data specification (your wide event) mediating the madness.
It's sometimes possible with normal logs to join on things which matter (as in your requestID example), but it's always possible with wide events since the relevant joins are executed by construction (also substantially more cheaply than a post-hoc join). Moreover, when you have to sub-sample, wide events give an easy strategy for ensuring your joins work (you sub-sample at a wide event level rather than a log-line level) -- it's not required; I've worked on systems with a "log seed" or whatever which manage that joinability in the face of sub-sampling, but it's more likely to "just work" with wide events.
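To make "sub-sample at a wide event level" concrete (a toy sketch; the numbers and names are made up): decide keep/drop once per request id, so every field of that request's event survives or disappears together.

    import hashlib

    SAMPLE_RATE = 0.01  # keep ~1% of requests

    def keep_request(request_id: str) -> bool:
        # Deterministic keep/drop per request, so the whole wide event
        # (all of its fields) is either emitted or dropped as a unit.
        digest = int(hashlib.sha256(request_id.encode()).hexdigest(), 16)
        return (digest % 10_000) < int(SAMPLE_RATE * 10_000)

    # Usage: sample once per request, not once per log line.
    event = {"request_id": "r-4711", "route": "/checkout", "isUserFoo": True}
    if keep_request(event["request_id"]):
        print(event)  # stand-in for shipping it to the observability backend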
The real argument in favor of wide events, IMO, is that it encourages returning information a caller is likely to care about at every level of the stack. You don't just get potentially slightly better logs; you're able to leverage the information in better tests and other hooks into the system in question. Parsing logs for every little one-off task sucks, and systems designed to be treated that way tend to suck and be nearly impossible to modify as desired if you actually have to interact with "logs" programmatically.
It's still just one design choice in a sea of other tradeoffs (and one I'm only half-heartedly pursuing at $WORK, since we definitely have some constraints which are solved by neither wide events nor traditional logging), BUT:
1. My argument against some random person's choice of counter-argument is perfectly sound. Nothing they said depended on wide events in the slightest, which was my core complaint, and I'm very mildly offended that anyone capable of writing something as otherwise sane and structured as your response would think otherwise.
2. Wide events do have a purpose, and your response doesn't seem to recognize that point in the design space. TFA wasn't the most enjoyable thing I've ever read, but I don't think the core ideas were that opaque, and I don't think a moment's consideration of knock-on implications would be out of the question either. I could be very wrong about the requisite background to understand the article or something, but I'm surprised to see responses of any nature which engage with irrelevant minutiae rather than the subset of core benefits TFA chose to highlight (and I'm even more surprised to see anything in favor of or against wide events, given my stated position that I care more about the faulty argument against them than whether they're good or bad).
> But creating and picking those placeholders used to be somebody's job, maybe a junior artist.
Is it really, though? After all, it's just maybe a junior artist.
I've had to work with some form of asset pipeline for the past ten years, the past six on an actual game, though not AAA. In all these years, devs have had the privilege of picking placeholders when the actual asset is not yet available. Sometimes we just lift it off Google Images. Sometimes we pick a random sprite/image from our pre-existing collection. The important part is to not let the placeholder end up in the "finished" product.
> It's up to the Indie Game Awards to decide the criteria.
True, and I'm really not too fond of GenAI myself, but I can't be arsed to raise a fuss over Sandfall's admission here. As I said above, the line for me is to not let GenAI assets end up in the finished product.
> It's up to the Indie Game Awards to decide the criteria
And up to us to decide whether The Indie Game Awards has impaired their credibility by choosing such a ridiculous criterion.
Do you think AAA game development teams pass on AI despite the fact that it produces better results at a fraction of the cost? I think not. So why would you cripple indie developers by imposing such a standard on them?
It seems completely out of touch with what's going on in the world of software development.
> it's impossible to have this discussion with someone who will not touch generative AI tools with a 10 foot pole.
Why? Would you say the same if the topic was about recreational drugs? Or, to bring it closer to home, if the topic was about social media?
I think you're being disingenuous by making the analogy to religious people refusing to read a certain book. A book is a foundational source of information. OTOH, one can be informed about GenAI without having used GenAI; you can study the math behind the model, the transformer architecture, etc.: the foundational sources of information on this topic. If our goal is to "drop the politics, and discuss things on their technical merits", well, I don't see how it can get more purely technical than that.
The frustrating thing is when you're debating people who firmly believe that generative AI "has no utility"... but also refuse to ever try it themselves.
(Which they might even justify because they've read the transformer paper or whatever. That doesn't help inform you if these things actually have practical applications!)
This. My personal daily driver of a laptop has a collage on it. My work-issued laptop is a lot cleaner but it has a small red sticker declaring it to be "Editor's Choice Bestseller of the Month".
Several years ago, it was new-laptop day at the startup I worked for. We didn't have an office so much as a coworking space. When people went to lunch, they just left their new MBAs on this big communal table. Some shut the lid; some didn't even lock the screen, but those would've been taken care of in ~5 min.
Anyway, I may or may not have switched some laptops around, but the point is someone definitely did. Half an hour later people returned. Some of them were instantly confused. Some of them entered a wrong password a couple of times. Some of them were greeted by an unfamiliar Google Chrome window.
I may or may not have wasted a collective forty or so man-minutes in the span of five, but I definitely enjoyed watching the confusion unfold.