Doesn't seem especially out of the norm for a large conference. Call it 10,000 attendees, which is large but not huge. Sure, not everyone attending puts in a session proposal. But others submit multiple. And many submit but, if not accepted, don't attend.
Can't quote exact numbers, but when I was on the conference committee for a conference with maybe high-four-figures attendance, we certainly had many thousands of submissions.
The problem isn't only papers; it's that the world of academic computer science coalesced around conference submissions instead of journal submissions. This isn't new and was an issue 30 years ago when I was in grad school. It makes the work of conference organizers the little block holding up the entire system.
Checking each citation one by one is quite critical in peer review, and of course when checking a colleague's paper. I’ve never had to deal with AI slop, but you’ll definitely see something cited for the wrong reason. And just the other day, during the final typesetting of a paper of mine, I found the journal had messed up a citation (same journal / author but wrong work!)
Is it quite critical? Peer review is not checking homework, it's about the novel contribution presented. Papers will frequently cite related notable experiments or introduce a problem that, as a peer reviewer in the field, I'm already well familiar with. These paragraphs generate many citations but are the least important part of a peer review.
(People submitting AI slop should still be ostracized of course, if you can't be bothered to read it, why would you think I should)
Fair point. In my mind it is critical because mistakes are common and can only be fixed by a peer. But you are right that we should not miss the forest for the trees and get lost in small details.
It’s a shame because “improve an off-the-shelf LLM to translate in line with this large dataset we prepared” is precisely the kind of project people love to work on. It could have been a chance to immortalize the hard work they did up until now.
You mean the two+ decades of labour of love was always done to be a nameless contribution to the AI machine? Somehow I think he would have picked another hobby if he had known that back then.
Is it? I don't think you quite understood the issue.
This issue is specifically centred around the human element of the work and organisation. The translators were doing good work, and they wanted to continue that work. Why it's important that the work is done by a human is probably only partially about quality of output and likely more about authenticity of output. The human element is not recorded in the final translation output, but it is important to people that they know something was processed by a human who had heart and the right intentions.
> The human element is not recorded in the final translation output, but it is important to people that they know something was processed by a human who had heart and the right intentions
Not that I entirely disagree with the conclusion here, but…
It feels like that same sentiment can be used to justify all sorts of shitty translation output, like a dialog with a cutesy “let’s get you signed in”, or dialogs with “got it” on the button label. Sure, it’s so “human” and has “heart”, but it also enrages me to my very core and makes me want to find whoever wrote it and punch them in the face as hard as I can.
I would like much less “human” in my software translations, to be honest. Give me dry, clear, unambiguous descriptions of what’s happening please. If an LLM can do that and strike a consistent tone, I don’t really care much at all about the human element going into it.
Oh I wasn't really referring to tone or language like that, I also don't particularly like it and prefer concise clear language. While LLMs can totally achieve that, I want to know a human decided to do it that way. At some point this mindset is going to look very silly, and perhaps even more so for software. But ultimately it's a human feeling to want that and humans are also not deterministic or logical.
If there really is enough market demand for this kind of processor, it seems like someone like NEC, who still makes vector processors, would be better poised than a startup rolling RISC-V.
So, a Systolic Array[1] spiced up with a pinch of control flow and a side of compiler cleverness? At least that's the impression I get from the servethehome article linked upthread. I wasn't able to find non-marketing, better-than-sliced-bread technical details from 3 minutes of poking at your website.
I can see why systolic arrays come to mind, but this is different.
While there are indeed many ALUs connected to each other in a systolic array and in a data-flow chip, data-flow is usually more flexible (at a cost of complexity) and the ALUs can be thought of as residing on some shared fabric.
Systolic arrays often (always?) have a predefined communication pattern and are often used in problems where data that passes through them is also retained in some shape or form.
For NextSilicon, the ALUs are reconfigured and rewired to express the application (or parts of it) on the parallel data-flow accelerator.
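For contrast, here is a toy sketch (not NextSilicon's design, just an illustration of the point above) of the fixed neighbor-to-neighbor timing in an output-stationary systolic matrix multiply: each cell only ever sees values arriving from its left and top neighbors and accumulates locally, so the communication pattern is baked in rather than rewired per application.

```python
# Toy model of an output-stationary systolic array computing C = A @ B.
# The index k = t - i - j captures the systolic skew: element A[i][k] and
# B[k][j] "meet" at cell (i, j) at step t, having marched in from the left
# and top edges. Purely illustrative; real hardware pipelines this in registers.

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]   # accumulator held inside each cell
    for t in range(3 * n - 2):        # enough steps for the last pair to meet
        for i in range(n):
            for j in range(n):
                k = t - i - j         # which operand pair reaches cell (i, j) now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]  # local multiply-accumulate
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The key contrast with the dataflow approach described above is that nothing in this loop nest can be rerouted at runtime: the schedule and the neighbor links are fixed by construction.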
My understanding is no, if I understand what people mean by systolic arrays.
GreenArray processors are complete computers with their own memory and running their own software. The GA144 chip has 144 independently programmable computers with 64 words of memory each. You program each of them, including external I/O and routing between them, and then you run the chip as a cluster of computers.
Text on the front page of the NS website* leads me to think you have a fancy compiler: "Intelligent software-defined hardware acceleration". Sounds like Cerebras to my non-expert ears.
NEC doesn't really make vector processors anymore. My company installed a new supercomputer built by NEC, and the hardware itself is actually Gigabyte servers running AMD Instinct MI300A, with NEC providing the installation, support, and other services.
That’s the irony of the situation. This should’ve been a clear win for Trump, using the prize to help bolster his status and direction on Venezuela. But then we got this absurd media storyline about him wanting the prize himself (probably to bury the government shutdown news).
It’s mostly conservatives IME going “Trump deserves the peace prize” while also saying “the peace prize is meaningless because Obama got one.” Depending on the context and who they are talking to they emphasize one side of that or the other.
My only critique of this is that maybe the countries they compared started to invest in safer urban driving infrastructure during the Lehman shock, and it's counteracting the universal growth in distractedness.
Anecdotal, but I've been in some cities where pedestrians don't even look, they just walk right into the road. Yes, I would be at fault if I hit them (in many cases), but I'm also not perfect, and I don't expect them to charge right in front of me.
Living in Boston 30-something years ago, I found this was required as a pedestrian, because drivers would try to intimidate you from entering a crosswalk by accelerating at you. So... you had to explicitly look away and still be aware of their presence.
(Not just Boston, I've seen this in some other cities since.)
This is one of the reasons to force slower speeds on city streets: more time to react to adverse events, and less damage if you do hit a pedestrian at 30 km/h rather than 60 km/h.
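To put rough numbers on that, here's a back-of-the-envelope sketch (the 1500 kg mass and 0.7 friction coefficient are just assumed illustrative values): both impact energy and braking distance scale with the square of speed, so halving speed roughly quarters both.

```python
# Rough physics behind "less damage at 30 km/h than 60 km/h":
# kinetic energy = 0.5 * m * v^2, braking distance ~ v^2 / (2 * mu * g),
# both quadratic in speed. Values below are illustrative assumptions.

def impact_stats(speed_kmh, mass_kg=1500, mu=0.7, g=9.81):
    v = speed_kmh / 3.6                        # km/h -> m/s
    kinetic_energy = 0.5 * mass_kg * v ** 2    # joules
    braking_distance = v ** 2 / (2 * mu * g)   # metres, ignoring reaction time
    return kinetic_energy, braking_distance

for speed in (30, 60):
    e, d = impact_stats(speed)
    print(f"{speed} km/h: ~{e / 1000:.0f} kJ of energy, ~{d:.1f} m to stop")
# 30 km/h: ~52 kJ, ~5.1 m    60 km/h: ~208 kJ, ~20.2 m
```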
> Gambling thrives in contexts where a ladder to success doesn’t exist or is perceived as not existing.
This is a neat moral message… but is it really true? Gambling is addictive, so the reality might be that even without such deep social problems you get similar levels of gambling.
Brenner, Brenner, and Brown wrote A World of Chance in which they draw from large reams of data and conclude that gambling is often used by those that see no other way up.
Gambling is addictive for some, in the same way that alcohol is addictive for some, yet blanket alcohol prohibition is not considered a great idea in hindsight.
I don't know how we could put limits on gambling that would make sense, though. There's a huge difference in bets that I used to make (which were all black-market sports bets, usually on 'game winner' or over/under) ~ once a week during the NFL season vs. the shit going on with FanDuel and all these phone-based gamified services. And that stuff absolutely encourages you to make bets that you can't afford and can easily turn into a problem even for someone who isn't 'addicted' per se -- it's like the predatory loot box model from video games.
TLDR I don't know how you write a law that would put hard and fast limits on what can be bet on and how much an individual is allowed to bet during a week in a way that would be palatable to the companies. I'm in favor of the blanket ban at this point; the black market for betting has always existed and it was better than the current setup.
You could make it so that if someone called the gambling hotline then they automatically are suspended from betting at all sports books for a year or something. Idk if this is the perfect policy but it took me about 5 minutes to come up with it.
I think banning or severely limiting advertising similar to cigarettes would be a good start. Stop having sports broadcasts be so intertwined with gambling, seeing odds on the screen when watching sports is gross.
The smartass but not wrong answer is to check rates of gambling in India when the caste system was at its strictest, or in Soviet Russia. If gambling didn't thrive there without major effort at suppression, that may be a counterexample.