Hacker News | frakt0x90's comments

I imagine a lot of people didn't even click. They see free Zig materials and upvote.


This is just not interesting. A dictator can force people to do things.


And everyone else.


And we are very grateful


Except a decent number of those examples have legally binding oaths not to screw you. And if they are found to have screwed you, they are barred from their profession and have to pay you a lot of money. Maybe that should be more common.


I know nothing about theorem provers. Is the idea that you can prove a large number of simpler statements that build on each other until it all implies the top level theorem you're proving (like how IRL math works)? That way you don't have to understand the gargantuan computer program all at once?

It just seems like it would be as hard to verify the accuracy of the code written to prove a complex theorem like FLT as it would be to verify a manuscript written in English. But if you can rely on smaller statements that build on each other, it would make more sense.


At least for theorem provers like Lean, they're basically type checkers, but for much more sophisticated type systems than you find in your average programming language. Basically, if the proof compiles, it is proven. In fact, if you look at Lean code it is very similar to Haskell (and the book Functional Programming in Lean is a great way to explore the language through that lens [0]).

You can also think of this in reverse, and it might help you understand better: your type checker at compile time is basically providing a sort of "proof" that all the functions and arguments in your program are consistent. Of course, because the type system is not as sophisticated, it can't prove your program is correct, but it at least can prove something about how your program will behave. If you had a more advanced type system, you could, in fact, prove more sophisticated things about the behavior of your code (for example, that the shapes of all your matrices match).
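
To make that concrete, here is a tiny sketch of the "propositions are types, proofs are programs" idea (Lean 4; the theorem name is just illustrative):

    -- A proposition is a type; a proof is a term of that type.
    -- If this file type-checks, the statement is proven.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- Handing the checker a bogus term gets rejected the same way
    -- passing a String where an Int is expected would be.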

A great book on this topic is Type Theory and Formal Proof [1].

0. https://lean-lang.org/functional_programming_in_lean/

1. https://www.cambridge.org/core/books/type-theory-and-formal-...


If I understand correctly, the only parts you'd need to verify are the core of the theorem prover, called the "kernel" (which is intentionally kept very small), and possibly the code in charge of translating the theorem statement into the language of that core (or you could look directly at the translated theorem statement, which for FLT I'd imagine may be smaller than the code to translate it). Those pieces are orders of magnitude smaller than the proof itself.


You're basically right, with one teensy caveat: you can't prove the kernel is correct. No model can prove that it is itself consistent, and if you use a more powerful model to prove consistency you're just shifting the goalposts - an inconsistent powerful model can show any model to be consistent.

The kernels are kept small so that we can reasonably convince ourselves that they're consistent.

The expression of FLT in Lean isn't complicated, because FLT isn't complicated; the kernel language of Lean isn't all that far from the surface language; and the expression "∀ a b c n : ℕ, n ≥ 3 → a ≠ 0 → b ≠ 0 → c ≠ 0 → a^n + b^n ≠ c^n" only needs the infix operators converted to lambda applications (though "only" does a lot of lifting here!)
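
For what it's worth, written out as a Lean declaration it would look roughly like this (a sketch using Mathlib's ℕ notation, not necessarily the exact form the FLT project uses); the "sorry" marks the hole the entire formalization effort has to fill:

    theorem flt_statement :
        ∀ a b c n : ℕ, n ≥ 3 → a ≠ 0 → b ≠ 0 → c ≠ 0 →
          a ^ n + b ^ n ≠ c ^ n := by
      sorry  -- the whole multi-year formalization lives behind this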


Correctness of the kernel and consistency of the theory implemented in it are different things. Gödel’s theorems prevent you from proving the latter, but not the former.


Interesting - what is correctness of the kernel here? That it faithfully implements the model?


You can think of the proofs that mathematicians write on paper as essentially pseudocode, as compared to Lean, which is actual code that can be compiled. As an example, it would be common to say “2*x + 4 = 0, therefore x is -2” in a mathematical proof. But in a computer-readable proof, you have to actually go through the algebraic manipulations, citing what you are doing and what theorem/lemma/rule you are applying. Modern systems like Lean make that much easier, but that’s the essential difference.
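
For example, that one-liner becomes something like this (a rough sketch assuming Lean 4 with Mathlib; "linarith" is a tactic that discharges linear-arithmetic goals, so the step-by-step justification is delegated to it here):

    import Mathlib.Tactic

    -- On paper: "2*x + 4 = 0, therefore x = -2".
    -- Machine-checked: every manipulation must be justified by a rule
    -- or tactic that produces a proof term the kernel can verify.
    example (x : ℤ) (h : 2 * x + 4 = 0) : x = -2 := by
      linarith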

Advanced proofs essentially just consist of a series of assertions “X, therefore Y, therefore Z, …” and the justification for each step isn’t always laid out explicitly. As a result, when you read a written proof, it often takes some work to “convince yourself” the proof is valid. And if a proof has a mistake, it’s probably not in one of those assertions, but rather in how you get from assertion X to assertion Y. It can often be really subtle.

Disclaimer: I have a math undergraduate degree, and have played around with the theorem-proving language Coq, but haven’t worked with Lean


From the little I know, language-based provers like Lean allow mathematicians to specify the steps of a proof in a precise language format that can be “run” by a machine-assisted prover to check its validity. This automates the verification and makes proofs reproducible.

In the past, when proofs were done by hand, little mistakes or little changes could lead to a complete failure of the proof. Mathematicians spent weeks or months redoing the steps and rechecking every little detail.

A machine-assisted prover basically raises the abstraction level for theorem proving. People don’t need to sweat the little details and little changes in the proof steps.

Language-based machine provers also enable wider collaboration, as a problem can be subdivided into smaller problems and farmed out to different people to tackle, perhaps to different experts for different parts of the problem. Since everyone uses the same language, the machine can verify each part and the overall proof once the parts come together.


The way it works is that if you trust the theorem prover to be bug free, then you only have to verify that the theorem was correctly stated in the program. The proof itself becomes irrelevant and there is no need to verify it.


Humans still have to state the goal and write a proof of it, but the proof is computer-verified. It's not irrelevant, except in the sense that any two different ways to prove the same statement are equivalently valid proofs.


More or less! Just like when writing a typical program or indeed mathematical proof, you have a lot of freedom in how you break up the problem into subproblems or theorems (and rely on “libraries” as results already proven). Traditionally the problem with these theorem-proving environments was that the bulk of even relatively basic theorems, let alone the state-of-the-art, had yet to be formalized, so you weren’t quite starting from the axioms of set theory every time but close enough. Lean has finally seen enough traction that formalizing highly non-trivial results starts to be possible.


If you want to get an idea of what proving things in Lean is like you could try this super fun game, which I think is by one of the people behind the FLT formalization project:

https://adam.math.hhu.de/#/g/leanprover-community/nng4


1: yes, it is split into proofs of many different propositions, with these building on each other (see the toy sketch after point 2).

2: The purpose of this is largely for the “writing it” part, not to make checking it easier. The computer checks the validity of the proof. (Though a person of course has to check that the formal statement of the result shown, is the statement people wanted to show.)
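
As a toy illustration of point 1 (Lean 4; the lemma names are made up), each small proposition gets its own proof, and later proofs simply cite the earlier ones:

    theorem two_add_two : 2 + 2 = 4 := rfl
    theorem four_lt_five : 4 < 5 := by decide

    -- Builds on the two lemmas above instead of reproving them.
    theorem two_add_two_lt_five : 2 + 2 < 5 := by
      rw [two_add_two]
      exact four_lt_five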


If you’re interested, here are some really fun games/puzzles to learn the basics of Lean: https://adam.math.hhu.de/


To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.


i'm an undergrad, so when i need to implement a paper, the idea is that i'm supposed to learn something from implementing it. i feel fortunate in that ai is not yet effective enough to let me be lazy and skip that process, lol


When I was younger, we all had to memorize phone numbers. I still remember those numbers (even the defunct ones) but I haven't learned a single new number since getting a cellphone.

When I was younger, I had to memorize how to drive to work/the grocery store/New Jersey. I still remember those routes but I haven't learned a single new route since getting a smartphone.

Are we ready to stop learning as programmers? I certainly am not and it sounds like you aren't either. I'll let myself plateau when I retire or move into management. Until then, every night debugging and experimenting has been building upon every previous night debugging and experimenting, ceaselessly progressing towards mastery.


I can largely relate... that said, I rarely rely on my phone for remembering routes to places I've been before. It does help that I've lived in different areas of my city and suburbs (Phoenix) so I'm generally familiar with most of the main streets, even if I haven't lived on a given side of town in decades.

The worst is when I get inclined to go to a specific restaurant I haven't been to in years and it's completely gone. I've started to look online to confirm before driving half an hour or more.


I noticed this also, and ever since, I've made it a point to always have memorized my SO's number and my best friend's number.


*Outsourcing to a parrot on steroids which will make mistakes, produce stale ugly ui with 100px border radius, 50px padding and rainbow hipster shadows, write code biased towards low quality training data and so on. It's the perfect recipe for disaster


Over the top humor duly acknowledged.

Disastrous? Quite possibly, but my concerns rest on different grounds.

Almost everything changes, so isn’t it better to rephrase these statements as metrics to avoid fixating on one snapshot in an evolving world?

As the metrics get better, what happens? Do you still have objections? What objections remain as AI capabilities get better and better without limit? The growth might be slow or irregular, but there are many scenarios where AIs reach the bar where they are better at almost all knowledge work.

Stepping back, do you really think of AI systems as stochastic parrots? What does this metaphor buy you? Is it mostly a card you automatically deal out when you pattern match on something? Or does it serve as a reusable engine for better understanding the world?

We’ve been down this road; there is already much HN commentary on the SP metaphor. (Not that I recommend HN for this kind of thing. This is where I come to see how a subset of tech people are making sense of it, often imperfectly with correspondingly inappropriate overconfidence.)

TLDR: smart AI folks don’t anchor on the stochastic parrots metaphor. It is a catchy phrase and helped people’s papers get some attention, but it doesn’t mean what a lot of people think it means. Easily misunderstood, it serves as a convenient semantic stop sign so people don’t have to dig into the more interesting aspects of modern AI systems. For example: (1) transformers build conceptual models of language that transcend any particular language. (2) They also build world models with spatial reasoning. (3) Many models are quite resilient to low-quality training data. And more.

To make this very concrete: under the assumption of universal laws of physics, people are just following the laws of physics, and to a first approximation, our brains are just statistical pattern matchers. By this definition, humans would also be “stochastic parrots”. I go to all this trouble to show that this metaphor doesn’t cut to the heart of the matter. There are clearer questions to ask: they require getting a lot more specific about various forms and applications of intelligent behavior. For example:

- under what circumstances does self play lead to superhuman capability in a particular domain?

- what limits exist (if any) in the self supervised training paradigm used for sequential data? If the transformer trained in this way can write valid programs then it can create almost any Turing machine; limited only by time and space and energy. What more could you want? (Lots, but I’m genuinely curious as to people’s responses after reflecting on these.)


Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work. It doesn't learn our APIs, it doesn't learn our business domain, it doesn't learn from the countless mistakes I correct it on. What we have now is interesting, it is helping sometimes and wasteful others. It is not intelligent.


> It is not intelligent.

Which of the following would you agree to... ?

1. There is no single bar for intelligence.

2. Intelligence is better measured on a scale than with 1 bit (yes/no).

3. Intelligence is better considered as having many components instead of just one. When people talk about intelligence, they often mean different things across domains, such as emotional, social, conceptual, spatial, kinetic, sensory, etc.

4. Many researchers have looked for -- and found -- in humans, at least, some notions of generalized intellectual capability that tends to help across a wide variety of cognitive tasks.

If some of these make sense, I suggest it would be wise to conclude:

5. Reasonable people accentuate different aspects and even definitions of intelligence.

6. Expecting a yes/no answer for "is X intelligent?" without considerable explanation is approximately useless. (Unless it is a genuinely curious opener for an in-depth conversation.)

7. Asking "is X intelligent?" tends to be a poorly framed question.


> Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work.

This confuses intelligence with memory (or state) which tends to enable continuous learning.


Update: it might have been clearer and more helpful if I wrote this instead…

This idea of intelligence stated above seems to combine computation, memory, and self-improvement. These three concepts (as I understand them) are distinct and logically decoupled.

For example, in the context of general agents, computational ability can change without affecting memory capability. Also, high computational ability does not necessarily confer self-improvement abilities. Having more memory does not necessarily benefit self-improvement.

In the case of biology, it is possible that self improvement demands energy savings and therefore sensory processing degradation. This conceptually relates to a low power CPU mode or a gasoline engine that can turn off some cylinders.


No confusion here.

This is just semantics, but you brought it up. The very first definition of intelligence provided by Webster:

1.a. the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason

https://www.merriam-webster.com/dictionary/intelligence


A time traveler from the future has recommended we both read or reread “Disputing Definitions” by Yudkowsky (2008).

Some favorite quotes of mine from it:

> Dictionary editors are historians of usage, not legislators of language. Dictionary editors find words in current usage, then write down the words next to (a small part of) what people seem to mean by them.

> Arguing about definitions is a garden path; people wouldn't go down the path if they saw at the outset where it led.

>> Eliezer: "Personally I'd say that if the issue arises, both sides should switch to describing the event in unambiguous lower-level constituents, like acoustic vibrations or auditory experiences. Or each side could designate a new word, like 'alberzle' and 'bargulum', to use for what they respectively used to call 'sound'; and then both sides could use the new words consistently. That way neither side has to back down or lose face, but they can still communicate. And of course you should try to keep track, at all times, of some testable proposition that the argument is actually about. Does that sound right to you?"


> This is just semantics, but you brought it up.

You told me my usage is wrong, but it is not. That's the only point I'm making. Thanks for the condescending lecture though.


Ok, let’s work with that definition for this subthread. Even so, one can satisfy that definition without having the ability to:

> “advance its capabilities”

(your phrase)

An example would be a person with damaged short-term memory. And (I'm pretty sure) an AI system without history that cannot modify itself.


Another thing that jumps out to me is just how fluidly people redefine "intelligence" to mean "just beyond what machines today can do". I can't help wondering how much your definition has changed. What would happen if we reviewed your previous opinions, commentary, thoughts, etc... would your time-varying definitions of "intelligence" be durable and consistent? Would this sequence show movement towards a clearer and more testable definition over time?

My guess? The tail is wagging the dog here -- you are redefining the term in service of other goals. Many people naturally want humanity to remain at the top of the intellectual ladder and will distort reality as needed to stay there.

My point is not to drag anyone through the mud for doing the above. We all do it to various degrees.

Now, for my sermon. More people need to wake up and realize machine intelligence has no physics-based constraints to surpassing us.

A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

B. The mammalian brain is an existence proof that intelligence can be grown / evolved. Homo sapiens could have bigger brains if not for birth-canal size constraints and energy limitations.

C. There are good reasons to suggest that designing an intelligent machine will be more promising than evolving one.

D. There are good reasons to suggest silicon-based intelligence will go much further than carbon-based brains.

E. We need to stop deluding ourselves by moving the goalposts. We need to acknowledge reality, for this is reality we are living in, and this is reality we can manipulate.

Let me know if you disagree with any of the sentences above. I'm not here to preach to the void.


> A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

Corrected to:

A. Businesses will boom and bust. Hype will come and go. Nevertheless, humanity seems to have an intrinsic drive to innovate, which means pushing the limits of technology. People will seek more intelligent machines, because we perceive them as useful tools. So AI is pressurized by long-running, powerful incentives, no matter how many missteps economic or otherwise. It would take a massive and sustained counter-force to prevent a generally upwards AI progression.


Did Webster also redefine the term in service of other goals?

1. the ability to learn or understand or to deal with new or trying situations

https://www.merriam-webster.com/dictionary/intelligence


This also reveals a failure mode in conversations that might go as follows. You point to some version of Webster’s dictionary, but I point to Stuart Russell (an expert in AI). If this is all we do, it is nothing more than an appeal to authority and we don’t get far.


This misunderstands the stated purpose of a dictionary: to catalog word usage — not to define an ontology that others must follow. Usage precedes cataloging.


Regarding the phrase statistical parrot, I would claim that statistical parrotism is an ideology. As with any ideology, what we see is a speciation event. The overpopulation of SEO parrots has driven out a minority of parrots who now respecialize in information dissemination rather than information pollution, leaving their former search-engine ecological niche and settling in a new one that allows them to operate at a higher level of density, compression and complexity. Thus it's a major step in evolution, but it would be a misunderstanding to claim that evolution is the emergence of intelligence.


The overpopulation of AI BS, prophetic predictions, pseudo-philosophers/anthropologists, and so on that this site has been swamped with is astonishing


LLMs ARE stochastic parrots; throw whatever ChatGPT slop answer you want at me, but facts are facts


> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

https://news.ycombinator.com/newsguidelines.html


Even then, facts remain facts ;)


Drawing blueprints is more enjoyable than putting up drywall.


The code is the blueprint.

“The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.” - Jack Reeves


depends. if i am converting it to then use it in my project, i don't care who writes it, as long as it works.


Yes, there have been multiple papers showing information extraction from embedding vectors if you know the model used. SHA by design maps similar strings pseudo-randomly. Embeddings by design map similar strings similarly.
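
A quick way to see the contrast (Python sketch; the embedding half is left as comments because it depends on whichever embedding model you use, so treat "embed" as a hypothetical stand-in):

    import hashlib

    def sha256_hex(s: str) -> str:
        # Cryptographic hash: near-identical inputs give unrelated digests.
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

    print(sha256_hex("the cat sat on the mat"))
    print(sha256_hex("the cat sat on the hat"))  # completely different digest

    # An embedding model does the opposite by design:
    # vec_a = embed("the cat sat on the mat")  # hypothetical embed() call
    # vec_b = embed("the cat sat on the hat")
    # cosine(vec_a, vec_b) would be close to 1.0, and that similarity
    # structure is exactly what inversion/extraction attacks exploit.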


Books are also technology that allow you to answer complex questions without recording the question.


Not necessarily though, it depends on where you got the book from (Amazon, the library?), and what your question is


In general, libraries actually do go out of their way to minimize the ways circulation history can be used against card-holders.

This isn't airtight, but it's a point of principle for most libraries and librarians, and they've gone to the mat over this. https://www.newtactics.org/tactics/protecting-right-privacy-...


This was a surprisingly big thing back in the early 2000s with The War Against Terror. I think that it was mostly for reasons of 'chilling effect', but the media made everyone aware that the Department of Homeland Security were paying attention to what books people took out of the library.

What was curious about this was that, at the time, there were few dangerous books in libraries. Catcher in the Rye and 1984 was about it. You wouldn't find a large print copy of Che Guevara's Guerrilla Warfare, for instance.

I disagree about how well libraries minimise the risk of anyone knowing who is reading what. On the web, where so much is tracked by low-intelligence marketing people, there is more data than anyone can deal with. In effect, nobody is able to follow you that easily, only machines, with data that humans can't make sense of.

Meanwhile, libraries have had really good IT systems for decades, with everything tracked in a meaningful form with easy lookups. These systems are state owned, so it is no problem for a three-letter agency to get the information it wants from a library.


Libraries don't tend to have consolidated, centralized IT. As a result, TLAs have to actually make subpoenas to the databanks maintained by individual, regional library groups, and The ALA offers guidelines on how to respond to those (https://www.ala.org/advocacy/privacy/lawenforcement/guidelin...).

This, of course, doesn't mean your information is irretrievable by TLAs. But the premise of "tap every library to bypass the legal protections against data harvesting" is much trickier when applied to libraries than when applied to, say, Google. They also aren't meaningfully "state-owned" any more than the local Elk's Club is state-owned; the vast majority of libraries are, at most, a county organ, and it is the particular and peculiar style of governance in the United States that when the Feds come knocking on a county's door, they can also tell them to come back with a warrant. That's if the library is actually government-affiliated at all; many are in fact private organizations that were created by wealthy donors at some point in the past (New York Public Library and the Carnegie Library System are two such examples).

Many libraries also purposefully discard circulation data so as to minimize the surface area of what can be subpoena'd. New York Public Library for example, as a matter of policy, purges the circulation data tied to a person's account soon after each loaned item is returned (https://www.nypl.org/help/about-nypl/legal-notices/privacy-p...).


Are you seriously thinking those books are dangerous or did your words exceed your thoughts?


Have you seen the list of books fascists want to ban? I think GP's point was exactly to emphasize that when we're talking about "dangerous books", we're talking about books that indicate you might not be a line-toeing member of The Party. We're talking about any book that any powerful person decides is some sort of threat, even if it's merely a threat to their ego.


Not dangerous at all! An analogy would be comparing a pea-shooter to an automatic rifle, or a thimble full of shandy when compared to a gallon of vodka. There is not a dangerous word in my local library!



So it is an indisputable fact that he was arrested for misdemeanor domestic violence, and that the police report indicated a "minor injury."

