kvh's comments

Author here. I've updated the post. The first draft of this app and blog post took me two hours, but I kept coming back with new ideas and tweaks throughout the week. By the end, I'd certainly spent more than two hours (more like 8?), so you're right, I just failed to update the post. The main point stands: it's surprisingly good for the amount of effort put in (although it's unclear how much more juice you could get out of GPT with more effort; there are clear diminishing returns).


Couldn’t you have just got ChatGPT to write the post?


No one wants to automate themselves out of a job, only other people.


I totally automate my roles on projects all the time so I can move on to more interesting things. I guess you mean that no one wants to be fired, but I don't see how that can result from automating one's work.

Also, I HATE doing repetitive things. Some people seem to like it though. To each their own, I guess. Reminds me of https://youtu.be/wNVLOuQNgpo


We all believe our job is so challenging and has such special requirements that it _can't_ be automated. It requires someone with the kind of experience and wisdom learned over a long time. Blah blah blah.


Except for the ones of us who keep automating our jobs so that we can spend our effort on more challenging tasks.


Not all of us.


I am trying to automate my job away, but I'm not succeeding.


That's actually my strategy. It means that:

1. My overall quality becomes better, and my bosses have always liked that (doing things by hand is more error-prone, not done on time, etc.)

2. I can go on holiday knowing my company doesn't desperately need me

3. I can spend the freed-up time actually innovating and bringing more value to the company/product

The problem is not automating yourself out of a job but not being able to leverage the newly gained capacity.


Despite productivity generally improving over the last few decades, wages have not kept pace.

I'm concerned this trend will continue with any productivity improvements from these models.


This approach helped me succeed. I took my skills and my achievements (which I made in my R&D time) to another company and got more money, and then I did it again and got more money again.


Right. Presented with efficiency gains, firms tend to increase profit, not wages. One way to change that is to give workers more bargaining power through market shifts or unionization.


All the productivity gains are transferred first to the consumer (because of market dynamics) and then (by the market winners) to shareholders. The market for workers' wages is not related to productivity; what is linked to productivity is how the company is internally organized.


Gold. Very deep insight into human nature.

;)


It doesn't seem like you've really replaced anyone with this. You spent 8 hours doing work that you could have paid an SQL analyst to do in much less time.

Unless you're saying that your time is worth less than you'd pay the analyst?


I think the idea is that once built it would be a service that could parse a question, then automatically develop and run any query in response.
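
For a sense of scale, the core loop of such a service is genuinely small. A minimal sketch, assuming the pre-1.0 openai Python client (API key via the OPENAI_API_KEY env var) and a SQLite warehouse; the schema, prompt, and run_question helper are illustrative, not the post's actual implementation:

    import sqlite3

    import openai  # pre-1.0 client, e.g. `pip install openai==0.27`

    # Illustrative schema; a real warehouse and prompt would differ.
    SCHEMA = "users(id, signup_date, plan); orders(id, user_id, total, created_at)"

    def run_question(question, db_path="warehouse.db"):
        """Hypothetical helper: natural-language question in, rows out."""
        prompt = (
            f"Given SQLite tables {SCHEMA}, write one SQL query that answers:\n"
            f"{question}\nSQL:"
        )
        resp = openai.Completion.create(
            model="text-davinci-003", prompt=prompt, max_tokens=256, temperature=0
        )
        sql = resp["choices"][0]["text"].strip()
        with sqlite3.connect(db_path) as conn:
            return sql, conn.execute(sql).fetchall()  # runs model-written SQL as-is!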

Sounds cool until it produces the wrong results... then you'll need to hire an analyst to check every query, just in case.


Put the requests in a queue. Have the bot generate the response. Then forward the response to a human analyst to double-check. A human can surely double-check a response much faster than they can produce one from scratch.

In many professions, it is common to have junior staff members do the grunt work, and then the more senior staff just review their work and either sign off on it, correct it, or send it back to be redone. You could use the same pattern here, replacing the junior staff with an AI, but keeping the senior one.
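
A minimal sketch of that review pattern, reusing the hypothetical run_question helper from upthread; the queue and sign-off flow are illustrative:

    import queue

    pending = queue.Queue()  # questions submitted by business users
    approved = []            # (question, sql, rows) a human signed off on

    def drain(analyst_review):
        """analyst_review is the human in the loop: returns True to approve."""
        while not pending.empty():
            q = pending.get()
            sql, rows = run_question(q)       # the AI does the grunt work
            if analyst_review(q, sql, rows):  # senior review: approve or send back
                approved.append((q, sql, rows))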


As if the analyst doesn't get the results wrong! For 1/50 of the price, maybe a few more errors are acceptable, even.


Which errors are you okay with?


Yeah, and whose responsibility is it when an error isn't caught in time and there are consequences/damage?


The consequences would be accounted for up front and paid out of the savings from using GPT.


What price tag are you willing to put on a loved one's life? Some consequences of fully automated systems can be counted in human lives.


The ones for which I would refer the question to GPT. We are still in control of which questions go to GPT/the intern analyst (less critical ones, where some fraction of errors is okay) and which go to the resident expert analyst.


Also, it could possibly remove the (dreaded) on-call aspect of it.

I think a lot of business owners would be relatively happy with automated instant answers, versus getting carefully considered answers in a week.


This is a good point. If users know the costs and benefits of using GPT versus not using it, then it certainly has value, provided those users are also willing to accept that not every answer will be 100% accurate.

In my experience business people often have a 'nose' for the right number and will bluff it out if the numbers are wrong and they're challenged.

Blue sky things or stuff you're putting in the annual report should be left to hoomans IMHO.


If there are extensive test cases with a static dataset, this may help with query modifications (optimizing queries, fine-tuning, etc.). Of course, this may not be feasible for new queries, as you can't have a test script until the query is ready.
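
For instance, a pytest-style golden test against a frozen snapshot; the table, filename, and expected value here are made up:

    import sqlite3

    def test_weekly_revenue_query():
        # frozen snapshot of the data; generated queries must reproduce
        # a known-correct answer before being promoted
        conn = sqlite3.connect("snapshot_2022_12.db")
        sql = "SELECT SUM(total) FROM orders WHERE created_at >= '2022-12-05'"
        (got,) = conn.execute(sql).fetchone()
        assert got == 14200.50  # known-correct answer for this snapshot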


They built a bot which can answer any number of questions, each of which would have needed some analyst time. Given that the analyst rotation was an entire day once every N weeks, and the bot took 1 day to make, this is going to pay for itself after 1 week.

This all assumes that the bot doesn't need tweaking for every answer — i.e. it gets at least some answers right without needing modifications to the bot — which appears to be the case based on the examples in the post.


But generally, unless there is a glaringly wrong result, only an analyst is going to know if the bot is right or not... so what exactly does that gain you?


Maybe it's not a position where it is critical that all answers are 100% accurate. Maybe getting it right every once in a while is enough to pay for the GPT compute time, but not really for analyst time.


Seems like the issue would be that you'd generally get results that 'look' right, but you'd never know if they were actually right without going through and... analysing them.


I'm saying there are applications where you don't have to know! As long as the fraction incorrect is less than 50% and you have 2:1 odds on the consequences, you don't have to know which answers are incorrect.
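
Spelled out (assuming a correct answer gains twice what an incorrect one loses):

    p = 0.5                # assumed fraction of correct answers
    gain, loss = 2.0, 1.0  # the "2:1 odds" on the consequences
    ev = p * gain - (1 - p) * loss
    print(ev)  # 0.5 per question: positive even at coin-flip accuracy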


Really? Then why pay for the thing in the first place? Why keep the data and make the queries if the results don't actually matter? I'm impressed that you have the ability to envision such a possibility; perhaps you can use that ability to come up with something reasonably likely as opposed to "conceptually possible".


I'm trying to think of easy examples. You're right that no obvious ones come to mind. I'm sure there is a sweet spot where we can make more money from cheap-but-sometimes-wrong GPT queries than by paying an analyst to be more definitively correct, but I'm tired and a bit fuzzy on the exact parameters.

I'll continue to think about it and write something up!


Hard to imagine a business model where you sell data that is only right 60% of the time. Maybe in a world where the next best data is even less reliable.


If you're OK with garbage data you don't need ChatGPT - you can probably make up plausible data on your own. Unless you're building some lorem ipsum stuff.


I might not be okay with only garbage data, but data that are correct 60% of the time may be good enough for some use cases, when they can be had for 1/50 of the price.


Then posit such a scenario as opposed to just the numbers...


It gets you a really sophisticated 'auto-complete' feature.


Not really. I'd guess that most people can tell if auto-complete is providing the answer they "wanted".


Can we replace a webmaster with 26 chatgpt prompts?


I reckon we can replace a shill with fewer.


Yes, great point, we share that concern. All of our components (patterns/openai-completion@v4) are open-source and can be downloaded and "dehydrated" into your Patterns app. They all use the same public API available to all apps.

We're working towards a fully open-source execution engine for Patterns -- we want people to invest with full confidence in a long-term ecosystem. For us, sequencing meant dialing in the end-to-end UX and then taking those learnings to build the best framework and ecosystem with a strong foundation. Stay tuned!

Thank you for the kind words and congrats on the great work on Orchest!


The marketplace is an open ecosystem, yes! Anyone can build their own components and apps and submit them. More details here https://www.patterns.app/docs/marketplace-faq/, and guide for building your own: https://www.patterns.app/docs/dev/building-components. It's early days but our goal is coverage of all data sources and sinks, the ontology layer of common transformations and ETL logic, and AI / ML models.


Those are great tools, but built for a different era. We've built Patterns with the goal of fostering an open ecosystem of components and solutions that interface with modern cloud infrastructure and the rest of the modern data stack, so folks can build on top of others' work. As more and more data lives in the cloud, in standard SaaS apps, more and more businesses are solving the same data problems over and over. We hope to fix that!


So more of a development platform than an end user tool?


Good concern. All Patterns apps are fully defined by code that you can download. We're building our open-source execution engine; once that lands, you'll be able to self-host forever if desired.


Agree, debugging is a critical user experience! In Patterns, you'll see the full stack trace and all logs when you execute Python or SQL.


The article isn't saying what people think it's saying, but Tether FUD makes good clickbait, I guess. Tether has indeed misrepresented its balance sheet at times, but the reality is it's a highly over-capitalized bank: whereas most banks have liquidity ratios of ~10% (less than that pre-2008), no one is questioning that Tether's is >50%.

A common misconception is that banks use "fractional reserve" lending, in reality private banks create money out of thin air when making loans, constrained only by regulated capitalization requirements (and the obligation to take the write-off on their own balance sheet should the loan default) [1].
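
The mechanics are plain double-entry bookkeeping; here is a stylized sketch of what [1] describes:

    # stylized double entry for a new 100 loan, per the BoE explainer [1]
    bank = {"loans": 0, "deposits": 0}  # assets, liabilities

    def make_loan(amount):
        bank["loans"] += amount     # asset: the borrower's IOU to the bank
        bank["deposits"] += amount  # liability: a brand-new deposit, created
                                    # by the act of lending itself

    make_loan(100)  # both sides grow by 100; no pre-existing deposit needed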

Another common misconception is that unregulated banks lead to financial instability and panic. The theoretical and historical evidence for this is pretty weak [2] -- people are much more vigilant with their money when banks are unregulated, and much more aware of the inherent risks of financial systems.

(If all of our regulations worked so well, why are our financial crises worse than ever? cf. 2008)

[1] https://www.bankofengland.co.uk/knowledgebank/how-is-money-c...

[2] https://www.jstor.org/stable/1814673


> Tether has indeed misrepresented its balance sheet at times, but the reality is it's a highly over-capitalized bank: whereas most banks have liquidity ratios of ~10% (less than that pre-2008), no one is questioning that Tether's is >50%.

Tether has a capital ratio of about 0.36%. Banks have a minimum capital ratio of about 3% (both of these figures look only at cash/cash-equivalents, not the full risk-adjusted capital ratio).

(Cite: https://www.bloomberg.com/opinion/articles/2021-06-16/don-t-...).


I like Matt, but that's a misleading comparison. You can't compare banks with non-banks, since, again, banks have the special regulated right to _create money out of thin air_ and put it on their balance sheet (or if you prefer the fractional reserve metaphor, they can double-count your deposit -- lending it out while pretending they still have it for you).

Here's a way to reality check the difference: if 90% of Tether holders redeem their deposits tomorrow, 100% of them will get their money back and Tether Ltd will remain 100% solvent and liquid. If 90% of JPMC depositors redeem tomorrow only 10% of them will get their money back and JPMC will be insolvent.

Where would you rather have your money?


> Here's a way to reality check the difference: if 90% of Tether holders redeem their deposits tomorrow, 100% of them will get their money back and Tether Ltd will remain 100% solvent and liquid.

That's not true. If you read Tether's own description of its reserves, most of it is not in cash. The largest single component is commercial paper; if Tether needed to redeem every single depositor, it would have to fire-sale something like $20 billion of commercial paper. It's also not clear how good that commercial paper actually is, or whether its value is what Tether claims it to be. Given the shenanigans they've done in the past, and the lack of anyone else in the financial system apparently transacting commercial paper with Tether, I would speculate that a good portion of it is basically Tether loans to cryptocurrency companies, hand-waved into commercial paper because Tether is unregulated and isn't required to follow regulated accounting rules when breaking down its reserves.

It's rather specious to claim that we can't analyze Tether through the lens of bank regulation because banks can create money out of thin air, when creating tethers out of thin air is precisely what Tether is accused of doing.

> Where would you rather have your money?

I'd rather have my money in the institution that is required to spend reams of paper proving that it's solvent than the one whose attestation amounts to "we're solvent, we pinky swear" and has refused to provide any more details on the basis of "crypto is too complicated for anybody to audit."


> If all of our regulations worked so well, why are our financial crises worse than ever? cf 2008

2008 did not compare to the Great Depression. I think we’re still (as a species) learning how to regulate banking well, but it does feel like we’ve learned some things.


SEEKING FREELANCER | SF | Remote or local

Looking for a front-end dev (Angular/TypeScript) to accelerate our app development. We are Silicon Valley veterans (Google, Square) building a data pipeline platform based on functional reactive components. Initial medium-term project to start; open to remote or local in SF.

Email me at kenvanharen@gmail.com


Hi HN, I've been doing data science for 10 years in Silicon Valley. snapflow is my attempt to bring the best practices of software engineering to the world of data: concepts like modularity and reusability, pure functions, testability, gradual typing, and immutability.
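
To give a flavor of what that means in practice, here is an illustrative sketch of the pure-function, immutable-data style (not snapflow's actual API):

    from dataclasses import dataclass
    from typing import Iterable

    @dataclass(frozen=True)  # immutable record
    class Order:
        user_id: int
        total: float

    # A "transformation" as a pure function: same input, same output,
    # no side effects; trivially testable and reusable across pipelines.
    def revenue_by_user(orders: Iterable[Order]) -> dict:
        out = {}
        for o in orders:
            out[o.user_id] = out.get(o.user_id, 0.0) + o.total
        return out

    assert revenue_by_user([Order(1, 9.5), Order(1, 0.5)]) == {1: 10.0}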

The goal is a framework that makes building industrial grade data pipelines fun and fast!

What gets me most excited is the ability to share and re-use data fetchers, transformations, analysis, and models -- the foundation for a collaborative data ecosystem.

Thanks for checking it out and feel free to reach out with thoughts! kenvanharen@gmail.com


It's impressive what abstraction NNs can achieve from just character prediction. Do the other systems they compare against also use the 81M Amazon reviews for training? It seems disingenuous to claim "state-of-the-art" and "less data" if they haven't.

