Hacker News: examplary_cable's comments

I'm currently using fly.io and I have no complaints so far (most of the problems I had were my own fault, like a DNS/IP/certificate issue).


So they're saving them from a fate worse than prison ... being a java developer.

I had to say it sorry (҂◡_◡)


No but, seriously


Awesome list!

I love that the cognitive overhead for web apps now is so great many people are "returning to monke" and coding in simple HTML/CSS/JS haha.

Question: shouldn't the items be in rows so I can see all of them at once? Why are the items in columns? I'll open an issue (:


None of these are simple HTML/CSS/JS.


It's part of the new modern WebDev semantical lexicon:

!React = HTML/CSS/JS

¯\_(ツ)_/¯


Htmx kinda is. I don't really know Alpine, but the GitHub docs remind me a bit of jQuery plugins.


>Shouldn't the items be in the row

Maybe! I was messing with different UI approaches to relay this data and this made sense to me, but I'll see if switching to rows is more clear. Thanks for the idea!


The column layout is basically impossible for me to meaningfully navigate on my laptop -- the scrollbar and descriptions don't both fit on the screen (I appreciate the repeated titles at the bottom, but ultimately they don't say much).


> The column layout is basically impossible for me to meaningfully navigate

Same. Somebody said it was done this way because of mobile. I'm not sure.

It's just weird to see "items" horizontally and properties vertically. I'm usually familiar with:

prop  | prop  | prop

item  | value | value

item2 | value | value


Don't you have horizontal scrolling on your laptop? Edge of the trackpad, or 2-3 fingers?


On mobile, I actually think it's pretty good like this. The names of the fields should stay visible when scrolling, though. Maybe this could be made reactive?


I made a bash script (using rofi) to use ChatGPT, if anyone is interested.

https://github.com/ilse-langnar/bashGPT


Thanks


I'm currently working with Alpine.js to try to build "Universal Components" or "Hypermedia Components": you just hit a URL and get your component, like /components/infinite-canvas, /components/svg-2-base64, or /components/lib-somelib.

[1] ilse.ink/components (it's not ready yet)


Sounds interesting.

ilse.ink/components didn't work. Could you post a link to it?


I'm still working on it, but that's going to be the link.

Trying to make universal components is hard. For example, I wanted to make all components available through a "filter" where you could reuse React.js, Vue.js, Svelte, and Solid components with each other. When you think about it, components are just I/O plus maybe some libraries. I'm thinking this field is ripe for some standardization.
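As a sketch of the "hit a URL, get a component" idea (the endpoints and fragments here are hypothetical, not the actual ilse.ink implementation): a tiny stdlib server that returns self-contained HTML fragments from /components/&lt;name&gt;, which a page could fetch and inject.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical registry: each component is a self-contained HTML fragment.
COMPONENTS = {
    "infinite-canvas": '<canvas width="800" height="600"></canvas>',
    "svg-2-base64": '<div x-data="{ out: \'\' }"><input type="file"><pre x-text="out"></pre></div>',
}

def render_component(name):
    """Return the HTML fragment for a component, or None if unknown."""
    return COMPONENTS.get(name)

class ComponentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # URLs look like /components/<name>
        prefix = "/components/"
        name = self.path[len(prefix):] if self.path.startswith(prefix) else None
        fragment = render_component(name) if name else None
        if fragment is None:
            self.send_error(404)
            return
        body = fragment.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8000), ComponentHandler).serve_forever()
```

The framework-agnostic part is just that the wire format is plain HTML over HTTP; anything that can fetch a URL can consume it.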


Do you know about shoelace?


Interesting Thanks!


You can ultra-fine-tune those models ... look at Vicuna-13B: if you know how to prompt it well, you can get it to work as """"well"""" as ChatGPT, running on local hardware. I just got Vicuna-13B on gradio [1] to act as a Japanese kanji personal trainer, and I only used a simple prompt: "I want you to act as a Japanese Kanji quiz machine. Each time I ask you for the next question, you are to provide one random Japanese kanji from JLPT N5 kanji list and ask for its meaning. You will generate four options, one correct, three wrong. The options will be labeled from A to D. I will reply to you with one letter, corresponding to one of these labels. You will evaluate my each answer based on your last question and tell me if I chose the right option. If I chose the right label, you will congratulate me. Otherwise you will tell me the right answer. Then you will ask me the next question. Avoid simple kanjis, let's go."

[1] https://chat.lmsys.org/


Sure, a 13B model can be fine-tuned to be pretty decent, which is quite remarkable compared to GPT-3's 175B parameters. But a 3B model has about a quarter as many parameters as Vicuna-13B, or about twice as many as GPT-2. Can you really fine-tune that to do anything useful that wouldn't be better handled by a more specialized open-source model?


How can someone get into using these models? How does ‘tuning’ work? How might I go about using these models for doing things like say summarizing news articles or video transcriptions? When someone tunes a model for a task, what exactly are they doing and how does this ‘change’ the model?


(I'm not an expert)

> How can someone get into using these models

You can use gradio (online), or download the weights manually (a plain git clone won't fetch them, they're too big) from https://huggingface.co/lmsys/vicuna-13b-delta-v1.1/tree/main, then load the model in PyTorch and try inference (text generation). But you'll need either a lot of RAM (16 GB, 32 GB+) or VRAM (a GPU).

> How might I go about using these models for doing things like say summarizing news articles or video transcriptions

Again, you might try it online, or set up a python/bash/powershell script to load the model for you so you can use it. If you can pay, I would recommend runpod for the shared GPUs.

> When someone tunes a model for a task, what exactly are they doing and how does this 'change' the model?

From my view ... not much ... "fine-tuning" means training (tuning) on a specific dataset (fine, as in fine-grained). As I understand it (I'm not sure), they just run more epochs on the model with the new data you provided until they reach a good loss (the model works); that's why quality data is important.
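A toy numeric illustration of that "keep the weights, run more epochs on new data" idea, using a one-parameter linear model instead of an LLM (the datasets and learning rate here are made up for the illustration):

```python
# Toy model: y = w * x, trained by gradient descent on mean squared error.

def train(w, data, lr=0.01, epochs=100):
    """Run more epochs of gradient descent starting from weight w."""
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training": the model learns the general rule y = 2x.
w = train(0.0, [(1, 2), (2, 4), (3, 6)])

# "Fine-tuning": start from the pre-trained weight and run more epochs on a
# new, task-specific dataset where the rule is y = 3x.
fine_tuned = train(w, [(1, 3), (2, 6)])
```

Real fine-tuning updates billions of weights and usually a lower learning rate, but the mechanics are the same: same training loop, new data, starting from the existing weights.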

You might try https://github.com/oobabooga/text-generation-webui; they have a pretty easy setup. Again, you'll need a lot of RAM and a good CPU for inference on CPU, or a GPU.



A newer and much better approach actually reduces the model size by narrowing the functionality of the system, similar to training a NN for a very specific task (as was typical several years ago), but now it can happen with far less data: https://arxiv.org/pdf/2305.02301.pdf. This paper is quite fantastic, and generating training data for smaller specialized models will likely shape up to be quite an important glue task for LLMs.
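A toy illustration of the distillation data flow from that paper (a big "teacher" labels unlabeled examples, and a much smaller "student" is trained on those labels); the real method also has the teacher emit step-by-step rationales, which is omitted here, and the keyword rule just stands in for an LLM:

```python
def teacher(text):
    # Stand-in for an expensive LLM: labels sentiment 1/0 by keyword.
    return 1 if "good" in text or "great" in text else 0

# Cheap unlabeled data, labeled once by the teacher.
unlabeled = ["good movie", "great acting", "boring plot", "bad pacing"]
distilled = [(text, teacher(text)) for text in unlabeled]

# "Student": a trivial keyword model fit to the teacher's labels, far
# smaller than the teacher but specialized to this one task.
positive_words = {w for text, label in distilled if label == 1 for w in text.split()}

def student(text):
    return 1 if any(w in positive_words for w in text.split()) else 0

# How often the student reproduces the teacher on the distilled set.
agreement = sum(student(t) == y for t, y in distilled) / len(distilled)
```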


While I recognize that this is only one example of what you can do, you can just ask ChatGPT to write you a traditional program that does something like this, and not have to run a (pretty big / power-intensive / slow on most hardware) 3B/7B-parameter model for simple tasks like these.

Yeah, it wouldn't be as flexible as an LLM (for example, synonyms won't work), but I doubt that for this particular task it'll be that big of a problem, and you can ask it to tweak the program in various ways (for example, introducing crude spaced repetition), making it arguably better than the AI solution, which takes some time to prompt-engineer and will never be "perfect".
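For a sense of scale, here's roughly the kind of traditional program ChatGPT might produce for the kanji quiz (the kanji/meaning data is a small hand-picked sample, not the full JLPT N5 list):

```python
import random

# Small hand-picked sample of JLPT N5 kanji (not the full list).
KANJI = {"水": "water", "火": "fire", "山": "mountain", "川": "river", "木": "tree"}

def make_question(rng=random):
    """Pick a random kanji and build four labeled options, one correct."""
    kanji, answer = rng.choice(list(KANJI.items()))
    wrong = rng.sample([m for m in KANJI.values() if m != answer], 3)
    options = wrong + [answer]
    rng.shuffle(options)
    labels = dict(zip("ABCD", options))
    correct = next(label for label, m in labels.items() if m == answer)
    return kanji, labels, correct

def quiz():
    kanji, labels, correct = make_question()
    print(f"What does {kanji} mean?")
    for label, meaning in labels.items():
        print(f"  {label}) {meaning}")
    choice = input("Your answer: ").strip().upper()
    if choice == correct:
        print("Correct, congratulations!")
    else:
        print(f"Wrong, the right answer was {correct}) {labels[correct]}.")

if __name__ == "__main__":
    quiz()
```

No model weights, no GPU, and it runs instantly; what it can't do is chat around the questions the way the LLM version can.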

I don't really know how much better fine-tuning makes these models, so I can't think of anything that they can actually be used for where they aren't worse than traditional programs, maybe as an AI in games? for example making them role-play as a historical figure in Civilization 6.


My example here was silly, I admit. But the point was that this simple task can become more "nuanced" (aside from ChatRWKV-Raven, no other model quite "works" like Vicuna or "tuned LLaMA"): given the correct prompt, it can act as someone in a fictional work, which might help you learn the language better by increasing conversational time (the most important metric; I'm talking comprehensible input here) by virtue of being more enjoyable.

Overall I like the progress: LLaMA releases -> fine-tuned LLaMA gets performance similar to ChatGPT at lower parameter counts (more efficient) -> people can replicate LLaMA-class models without anything special, effectively making LLMs a "commodity" -> you are here.


GPT-GB


Sure, but I think when compared 1-to-1, some people are generally perceived to be better writers ... I'm not sure GPT-4 would be significantly better than GPT-3 ... it might depend on the prompt and other issues.


Would you rather have it be a plain text website? No attack, just wondering.


I don’t think 25 fonts have ever made something better. I’m sure there is a number greater than 1 but less than 25 where we can read happily.


25 fonts has made a font library better. :)


That would be great, yeah. In particular, the best type of site is one that plays nice with the user’s default font. There’s no accounting for taste after all, so just let people pick their own style.

