Hacker Newsnew | past | comments | ask | show | jobs | submit | iguana's commentslogin

Brilliant idea and great execution!


thanks for the kind words!


Trivial examples show that this isn't nearly as good as ChatGPT. The headline should be changed.


It took Open.ai 10 years of fine tuning, can’t expect things to work as well in day 1.


I like blog post title.

Introducing Lamini, the LLM Engine for Rapidly Customizing Models

Obviously it still takes a huge amount of work to customize a model to be as good as GPT4 or ChatGPT, that’s exactly why we are building Lamini.

To give developers tools to make it easier.

Hopefully it is clear that it will take more work than 1 day.


What makes you think copy and pasting responses from an LLM is the actual job being posted?


This is an unhelpfully cynical take. The job title has "engineer" in it, so a more charitable interpretation is more serious than "AI monkey".

Using LLMs to solve real problems is not easy. Making sure that you don't introduce regressions while making improvements is difficult, and requires building and evaluating a dataset, and the necessary pipelines. It may also include diversification of LLM providers, and creating the necessary abstractions. A fundamental understanding of how LLMs work, ability to compare different architectural approaches, along with typical data engineering and software development skills would be required.

What if you want to use the LLM for Question/Answer systems that requires working with embeddings? What if you want to find a way to process data locally without sending sensitive data to the LLM provider?

This requires real engineering skills.


> Using LLMs to solve real problems is not easy.

Yes it is.

That is exactly why products like ChatGPT have been taking off as quick as they have.

What you're talking about is not using LLMs as a product but using them as a component within a broader system. And so of course that requires engineering skills.


That's why the word "engineer" is in the title and it isn't "prompt writer" or "ChatGPT user".

It's highly unlikely that anyone taking this effort seriously is copying and pasting from ChatGPT, rather than using the API and building pipelines as part of a broader system.

Instruction-fine-tuned LLMs like ChatGPT require creating, validating, and maintaining prompts. Finding ways to use them safely is also not easy - prompt injection and hallucination are just 2 potential pitfalls - there are many more.

Denigrating this effort as "AI monkey" is myopic at best, but really just comes across as a signal that someone is terrified of being replaced by this new tech. With that attitude, they will be.


Your objection boils down to

>What if you want to [do software engineering]?

Then you're a software engineer. Writing prompts isn't engineering. Building systems is engineering. Just because I use keyboards to program doesn't mean I'm a keyboard engineer, does it?


Writing prompts and engineering together = prompt engineer. The engineering depends on the prompt and the prompt depends on the engineering. Just like an ML engineer, or a QA engineer, or [anything] engineer. How specific the job title gets really depends on hiring criteria and daily job function.

Otherwise, the job title would be "prompt writer".

Your point is what, that existing engineering titles cover this effort? Sure, you can just call all of it software engineering, but sometimes it's useful to be more specific. The LLMs are so powerful now that this new, more specific title makes sense to me, and clearly those using this new title. We'll see how it pans out over the next few years.


The engineering doesn't depend on the prompt. If you can't build without the LLM you're not an engineer.


This is demonstrably false for many use cases. For one broad example, LLMs have shown incredible performance on many NLU and NLP tasks, that are not currently possible using other techniques.


Funny


For those with the new Samsung Galaxy S23 series, they use a Snapdragon X70 modem and so are not impacted.


If you're interested in increasing EVOO consumption for the health benefits based on polyphenols, look into Moroccan olive oil, as it has very high polyphenol levels - as high as 30x regular EVOO.


You may be conflating programs running in a terminal and the terminal itself. We've managed to get this far without the latter.


I am not conflating the two. If the terminal can run programs connected to the internet, then the terminal has internet connectivity. The host system would not be able to tell the difference.

Warp could certainly promise not to include any phone-home functionality in their code, but unless it's open-source and everything is audited, it could easily call the host system's HTTP client and still phone home.


> If the terminal can run programs connected to the internet, then the terminal has internet connectivity.

Is this true? This sounds wrong to me but I don't know the inner workings of terminals. The terminal just executes programs and handles pipes it seems. A terminal can be completely walled from the internet, and when you execute something from it, say, curl, then curl has it's own memory space and access layer outside the terminal, and just has it's stdio wired to the terminal.


> The terminal just executes programs and handles pipes it seems. A terminal can be completely walled from the internet, and when you execute something from it, say, curl, then curl has it's own memory space and access layer outside the terminal, and just has it's stdio wired to the terminal.

As I said in my comment, even if you "wall" the terminal off from the internet, if it can make system calls on behalf of the user, it can still access the internet.

If a terminal has sufficient access to the host system to call `curl https://www.google.com` on behalf of the user, then it can call it without any user input.

There is nothing on the host machine that can authenticate system calls coming from the terminal application as "user-initiated" or not. This is similar to the warning that "you can't trust the client"[1].

1. https://security.stackexchange.com/questions/105389/dont-tru...


You're technically correct here due to some sloppy words, but this isn't the point that everyone here is trying to make. We know our terminals can connect to the internet, we don't want them to do that without being instructed to. If our terminals randomly curl'd websites (as opposed to delivering telemetry to a 3rd party), I'm sure the discussion would be similarly displeased.


And what I'm saying is that there's no way to set a terminal's permissions on the host system such that it can access the internet on behalf of the user but cannot access the internet on behalf of its creators.

This is a human problem, not a software one. Your terminal is as trustworthy as its creators. It cannot be locked down to prevent telemetry and still be a useful terminal. That was my original point and it is still true.

No one should use a for-profit terminal emulator, especially one created by a VC-backed startup, full stop.


Replicant | https://replicant.ai/ | QA Lead, Fullstack, Deep Learning, Data Engineering, and Telephony Engineering positions | San Francisco, CA or REMOTE Replicant is a Conversational AI technology that works out of the box to solve customer problems over the phone. We craft great conversations by combining Machine Learning, Artificial Intelligence, and linguistic conversational design into the fastest, smartest, and most expressive Thinking Machines you’ve ever spoken with.

We're a small team (~20 people) tackling a big industry with many eyes on it, using powerful technology. Our team comes from a diverse background of industry and the arts, and we are distributed across the US and Canada.

Our stack includes TypeScript (browser and node), JavaScript, Postgres, Redis, Python (3.x), and Pytorch Infrastructure is Google Cloud (though we also use AWS and Azure) and most services run in k8s (Kubernetes)

We're hiring:

* QA Lead - own quality engineering, E2E testing, automation, and testing conversations on the phone

* Deep Learning / NLP / Transcription - Transformers, Intent detection

* Data Engineering - model data and build data pipelines for realtime low latency inference

* Telephony / DSP Engineer - SIP integrations, low latency audio processing

In order to support you we offer:

* A remote-friendly culture: Communication is big. Most of us work remotely, full and/or part time.

* Offsites: We come together regularly for some unwinding and face-to-face time.

* Benefits: a great health plan, equity, and 401K.

However, the most significant advantage is that you'll be early enough to shape Replicant's culture and the next era of growth.

Please reach out to: jobs@replicant.ai


Replicant | https://replicant.ai/ | QA Lead, Fullstack, Deep Learning, Data Engineering, and Telephony Engineering positions | San Francisco, CA or REMOTE

Replicant is a Conversational AI technology that works out of the box to solve customer problems over the phone. We craft great conversations by combining Machine Learning, Artificial Intelligence, and linguistic conversational design into the fastest, smartest, and most expressive Thinking Machines you’ve ever spoken with.

We're a small team (~20 people) tackling a big industry with many eyes on it, using powerful technology. Our team comes from a diverse background of industry and the arts, and we are distributed across the US and Canada.

Our stack includes TypeScript (browser and node), JavaScript, Postgres, Redis, Python (3.x), and Pytorch Infrastructure is Google Cloud (though we also use AWS and Azure) and most services run in k8s (Kubernetes)

We're hiring:

* QA Lead - own quality engineering, E2E testing, automation, and testing conversations on the phone

* Deep Learning / NLP / Transcription - Transformers, Intent detection

* Data Engineering - model data and build data pipelines for realtime low latency inference

* Telephony / DSP Engineer - SIP integrations, low latency audio processing

In order to support you we offer:

* A remote-friendly culture: Communication is big. Most of us work remotely, full and/or part time.

* Offsites: We come together regularly for some unwinding and face-to-face time.

* Benefits: a great health plan, equity, and 401K.

However, the most significant advantage is that you'll be early enough to shape Replicant's culture and the next era of growth.

Please reach out to: jobs@replicant.ai


How confident are you that the results will indicate a level of impairment, rather than a particular concentration of THC?

Other substances contained in cannabis have significant synergistic effects with THC; are you going to look for other cannabinoids and the presence of terpenes?


Confident that it will indicate, not necessarily confident that it will correlate (if that makes sense). The impairment window is approximately 2-3 hours post consumption. We can calibrate our sensor to have a threshold of detection that only triggers a positive if you smoked 2-3 hours ago. However, it is not clear that a higher concentration on our device would indicate higher impairment. More research is required to determine that.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: