This isn't really correct. The way they decide what an IQ of 160 is, is by getting thousands of people together, asking them questions, and seeing what percentage of the population gets which questions right or wrong.
Questions that are tagged 160 are irrelevant because the norming group is WAYYY too small to say that 1 in 30,000 people will get this correct. It's simply not possible to come up with such a question. The questions that get you a 160 IQ score are almost certainly questions that no one in the norming group got correct.
So no, there's zero proof that 1 in 30,000 people have an IQ of 160. There simply haven't been that many IQ tests administered to the general population to determine that.
Testing problems can make it hard to tell which specific people in a room of geniuses have 160 IQ, but by definition the smartest fraction of a percent of people in the world have 160 IQ. The only way that group doesn't exist is if that percent doesn't exist, as in IQ hits a brick wall at 140 or 150 and higher is impossible.
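To make the "by definition" point concrete: the 1-in-30,000 figure isn't measured from any norming sample; it falls out of the normal model (mean 100, SD 15) that modern IQ scores are defined against. IQ 160 is 4 standard deviations above the mean, and the upper-tail probability at z = 4 works out to roughly 1 in 31,600. A quick sketch of that arithmetic:

```python
from math import erfc, sqrt

def rarity(iq, mean=100.0, sd=15.0):
    """One-in-N rarity of scoring at or above `iq`, assuming the
    standard normal model that IQ scales are defined against."""
    z = (iq - mean) / sd
    p = 0.5 * erfc(z / sqrt(2))  # upper-tail probability P(Z >= z)
    return 1.0 / p

print(round(rarity(160)))  # roughly 1 in 31,600 under the normal model
```

Whether any real test can discriminate at that level is a separate (and fair) question, but the percentile itself is baked into the scoring model.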
My habit is to cancel right away when I start a subscription. You can still use the service for the month you paid for, and it's easy to restart it if you do still find yourself attempting to use it a month later.
Substack disappointed me because you can't do this. They remove content access if you do not have an "active" subscription. You don't have to pay again to "reactivate", but the paid content is locked until you do.
This is obviously intentional dark-pattern behavior, because when I went to fully unsubscribe later in the month, they showed me a special dialog and offered a free month, hoping I'd forget again this month.
Very lame.
Oh, you haven't seen the darker behavior of many Japan-based services:
- The payment cycle always closes on the 1st of the month (which means that if you sign up on the last day of the month, you get 1 day of service for a full month of payment. No proration or anything.)
- If you cancel the service, you immediately lose access; it doesn't matter if it's the 1st day of the month or the very end of the month, it's gone. To re-enroll, you often have to pay again.
I had a boss I didn't like a while ago, but one of the things he did that I did like was set a calendar reminder to unsubscribe. Too bad; it would've been easier on me mentally and emotionally if he were completely irredeemable, but I admit to using this trick myself now.
I've been using Privacy.com cards for years to avoid this. Set a limit of 1 month on the card, and if I'm actually using the sub I'll fix/update it when the sub runs out.
I don't think it's cheap to pay for a service, test it out for a month, find that in the beginning you used it every day and by the end of the month you forgot it existed, and then decide not to keep paying for it.
Probably for the next proposal I write, I'll pay for it again. It's super useful for taking care of all the bullshit things you have to write for science without plagiarizing yourself.
I also tried using it as a dungeon master like that blog post from a couple weeks ago. But GPT-4 didn't seem to remember things reliably enough to actually work. Basically there was an uncanny valley, because GPT-4 doesn't have a pad of paper to write things down on like any person would have.
This will be great for Kenya and other countries like it.
A really bad problem in these countries is that their best and brightest are too busy doing service jobs for Westerners to do the work in manufacturing and infrastructure that builds their capital base.
Per the comment below, some have companies formed around this work. Also, as long as they are spending that income locally, they're still bringing wealth to their country, which is a good thing.
Not exactly sure it's great for Kenya. I made good pocket money solving CS assignments, better than my 8-hour internship. That was not a career, but it was good pocket money.
I would just sit idle if I were not solving assignments.
FWIW, as a non-pathologist with a pathologist for a father, I can almost pass the pathology boards when taken as a test in isolation. Most of these tests are very easy for professionals in their fields, and are just a Jacksonian barrier to entry. Being allowed to sit for the test is the hard part, not the test itself.
As far as I know, the exception to this is the bar exam, which GPT-4 can also pass, but that exam plays into GPT-4's strengths much more than other professional exams.
What is a Jacksonian barrier to entry? I can't find the phrase "Jacksonian barrier" anywhere else on the internet except in one journal article that talks about barriers against women's participation in the public sphere in Columbia County NY during Andrew Jackson's presidency.
I may have gotten the president wrong (I was 95% sure it was named after Jackson until I Googled it), but the word "Jacksonian" was meant to refer to the addition of bureaucracy to a process to make it cost more to do, and thus discourage people. I guess I should have said "red tape" instead...
Either it's a really obscure usage of the word or I got the president wrong.
"It's difficult to attribute the addition of bureaucracy or increased costs to a specific U.S. president, as many presidents have overseen the growth of the federal government and its bureaucracy throughout American history. However, it is worth mentioning that Lyndon B. Johnson's administration, during the 1960s, saw a significant expansion of the federal government and the creation of many new agencies and programs as part of his "Great Society" initiative. This expansion led to increased bureaucracy, which some argue made certain processes more expensive and inefficient. But it's important to note that the intentions of these initiatives were to address issues such as poverty, education, and civil rights, rather than to intentionally make processes more costly or discourage people."
Exams are designed to be challenging to humans because most of us don’t have photographic memories or RAM based memory, so passing the test is a good predictor of knowing your stuff, i.e. deep comprehension.
Making GPT sit it is like having someone with no knowledge, but a computer full of past questions and answers and a search button, sit the exam. It has metaphorically written its answers on its arm.
This is essentially true. I explained it to my friends like this:
It knows a lot of stuff, but it can't do much thinking, so the minute your problem and its solution are far enough off the well-trodden path, its logic falls apart. Likewise, it's not especially good at math. It's great at understanding your question and replying with a good plain-English answer, but it's not actually thinking.
That's a disservice to your friends, unless you spend a bunch of time defining thinking first, and even then, it's not clear that it, with what it knows and the computing power it has access to, doesn't "think". It totally does a bunch of problem solving; fails on some, succeeds on others (just like a human that thinks); GPT-4's better than GPT-3. It's quite successful at simple reasoning (eg https://sharegpt.com/c/SCeRkT7) and moderately successful at difficult reasoning (eg getting a solution to the puzzle question about the man, the fox, the chicken, and the grain trying to cross the river; GPT-3 fails if you substitute in different animals, but GPT-4 seems to be able to handle that). GPT-4 has passed the bar exam, which has a whole section on logic puzzles (sample test questions from '07: https://www.trainertestprep.com/lsat/blog/sample-lsat-logic-... ).
It's able to define new concepts and new words. Its masters have gone to great lengths to prevent it from writing out particular types of judgements (eg https://sharegpt.com/c/uPztFv1). Hell, it's got a great imagination if you look at all the hallucinations it produces.
All of that sums up to many thinking-adjacent things, if not actual thinking! It all really hinges on your definition of thinking.
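For what it's worth, the river-crossing puzzle is the kind of thing that is mechanically solvable without any "thinking" at all: a tiny breadth-first search over which items are on which bank finds the classic 7-crossing solution. A minimal sketch (the item names and helper functions here are just illustrative):

```python
from collections import deque

ITEMS = ("farmer", "fox", "chicken", "grain")

def safe(left):
    # left: frozenset of items on the left bank; the right bank is the complement.
    # A bank is unsafe if a predator/prey pair is there without the farmer.
    for bank in (left, frozenset(ITEMS) - left):
        if "farmer" not in bank:
            if {"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank:
                return False
    return True

def solve():
    start, goal = frozenset(ITEMS), frozenset()
    queue = deque([(start, [])])  # (state, list of cargoes carried so far)
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        side = state if "farmer" in state else frozenset(ITEMS) - state
        # The farmer crosses alone (None) or with one item from his bank.
        for cargo in [None] + [x for x in side if x != "farmer"]:
            moved = {"farmer"} | ({cargo} if cargo else set())
            nxt = state - moved if "farmer" in state else state | moved
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo or "nothing"]))
    return None
```

The interesting part of the GPT comparison isn't whether the answer exists (it's all over the training data); it's whether the model still tracks the constraints correctly when you swap in animals it hasn't seen paired with this puzzle before.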
Exactly. It's almost like saying dictionaries are better at spelling bees and hence smarter than humans, or that computers can easily beat humans at Tetris and are smarter because of that.
That's not a response from someone who wrote the answers on the inside of their elbow before coming to class. That's genuine inductive reasoning at a level you wouldn't get from quite a few real, live human students. GPT4 is using its general knowledge to speculate on the answer to a specific question that has possibly never been asked before, certainly not in those particular words.
It is hard to tell what is really happening. At some level though, it is deep reasoning by humans, turned into intelligent text, and run through a language model. If you fed the model garbage it would spit out garbage. Unlike a human child who tends to know when you are lying to them.
If you fed the model garbage it would spit out garbage.
(Shrug) Exactly the same as with a human child.
Unlike a human child who tends to know when you are lying to them.
LOL. If that were true, it might have saved Fox News $800 million. Nobody would bother lying, either to children or to adults, if it didn't work as well as it does.