No discussion of the cost of running such an attack, or of which models were involved during which phases. It seems like you can now use Anthropic as a proxy botnet.
There's some fascinating research related to this happening at UC Berkeley's Almeida Lab (https://nature.berkeley.edu/almeidalab/) on grapevine diseases, which is particularly relevant to the wine industry in Northern California.
1. The lab, led by Professor Rodrigo Almeida, is studying economically important grape diseases, focusing on Grapevine leafroll disease and Grapevine red blotch disease [1].
2. They're developing an AI tool for fast and accurate disease identification in vineyards, which could be a game-changer for disease management.
3. Their work combines molecular biology, ecology, and bioinformatics, using advanced techniques like genomics.
4. A recent study led by Kai Blaisdell showed that mealybugs efficiently transmit Grapevine leafroll-associated virus 3 under field conditions, with disease symptoms appearing throughout the plant one year after infection [2].
5. Their focus seems to be more on understanding and managing plant diseases using various molecular and ecological approaches [3].
This research is crucial for the wine industry, especially in regions like Northern California. It's not quite "genomics on the brink of the discovery what was penicillin through crispr", but it's still cutting-edge work that could have significant impacts on grape cultivation and wine production.
Given the recent Supreme Court decision to overturn the Chevron deference doctrine, the responsibility for interpreting legal gaps now firmly rests with the courts, pushing Congress to be more explicit in its legislation. This shift underscores the need for tools that can help both government officials and advocates navigate the complexities of policy development and legislative analysis efficiently.
If you're a government official, advocate, or researcher feeling the impact of this change, consider reaching out to me. I am working on a platform that leverages AI to revolutionize policy research, bill drafting, and legislative analysis. I think there is a lot of toil that can be removed with intelligent policy research, advanced legal analysis, collaborative policy development, and automated legislative drafting. My team and I are designing it to empower teams with tools to facilitate grassroots participation while integrating seamlessly with existing civic platforms.
We're currently looking for early testers to help refine our product. If you're interested in enhancing your productivity and navigating the new legislative landscape with ease, DM me. Your insights could be invaluable in shaping a tool that addresses the real challenges in policy development and advocacy.
This RouteLLM framework sounds really promising, especially for cost optimization. It reminds me of the KNN-router project (https://github.com/pulzeai-oss/knn-router), which uses a k-nearest neighbors approach to route queries to the most appropriate models.
What I like about these kinds of solutions is that they address the practical challenges of using multiple LLMs. Rate limits, cost per token, and even just choosing the right model for the job can be a real headache.
KNN-router, for example, lets you define your own logic for routing queries, so you can factor in things like model accuracy, response time, and cost. You can even set up fallback models for when your primary model is unavailable.
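To make the idea concrete, here is a minimal sketch of nearest-neighbor routing with fallbacks. This is not knn-router's actual API; the toy bag-of-words "embedding", the example queries, and the model names are all made up for illustration (real routers use proper embedding models):

```python
from collections import Counter
import math

# Toy embedding: bag-of-words counts. A real router would use an
# embedding model; this stand-in just keeps the sketch self-contained.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Labeled examples: past queries and the model that handled them best.
# Queries and model names are hypothetical.
EXAMPLES = [
    ("write a python function to sort a list", "code-model"),
    ("fix this segfault in my C program", "code-model"),
    ("summarize this news article", "cheap-general-model"),
    ("explain quantum entanglement simply", "strong-general-model"),
]

# Fallback chain for when a chosen model is down or rate-limited.
FALLBACKS = {"code-model": ["strong-general-model", "cheap-general-model"]}

def route(query, k=3, unavailable=()):
    """Pick a model by majority vote among the k nearest labeled examples,
    falling through to backups if the winner is unavailable."""
    q = embed(query)
    ranked = sorted(EXAMPLES, key=lambda ex: cosine(q, embed(ex[0])),
                    reverse=True)
    votes = Counter(model for _, model in ranked[:k])
    for model, _ in votes.most_common():
        if model not in unavailable:
            return model
        for backup in FALLBACKS.get(model, []):
            if backup not in unavailable:
                return backup
    return "cheap-general-model"  # last-resort default
```

In practice you'd extend the scoring with per-model cost and latency terms, which is exactly the kind of custom logic these routers let you plug in.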
It's cool to see these kinds of tools emerging because it shows that people are starting to think seriously about how to build robust, cost-effective LLM pipelines. This is going to be crucial as more and more companies start incorporating LLMs into their products and services.
Cost is a plus, but from what I see, getting good response time is an even bigger win. OpenAI Azure instances, for example, are inconsistent, and it's far too common to get a 40-second lag on responses from gpt-4o.
That's an interesting prompt. I tried it in Spaces on our Pulze.ai platform, and it nailed it by automatically choosing the right model (gpt-4-turbo) for this type of question: "Yes, you can infuse garlic into olive oil without heating it up, but it requires caution due to the risk of botulism, a potentially fatal illness caused by Clostridium botulinum bacteria. These bacteria can thrive in low-oxygen environments and can produce toxins in food products like garlic-infused oil if not prepared or stored correctly."
I think that is one advantage of not blindly trusting a single model, but instead finding consensus among several top-rated models within one interface that lets you quickly cross-check.
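A minimal sketch of that cross-checking idea, assuming the answers have already been collected from each provider (the model names and answers here are purely illustrative):

```python
from collections import Counter

def consensus(answers, threshold=0.5):
    """Return the majority answer if it clears the agreement threshold,
    otherwise return None to flag the question for manual cross-checking."""
    counts = Counter(answers.values())
    answer, votes = counts.most_common(1)[0]
    if votes / len(answers) > threshold:
        return answer
    return None  # no clear consensus -- worth a closer look

# Hypothetical normalized answers from several models to one question.
answers = {
    "model-a": "yes",
    "model-b": "yes",
    "model-c": "no",
}
```

Real answers are free text rather than clean "yes"/"no" labels, so in practice you'd normalize or semantically compare them first; the voting step itself stays this simple.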
Which Gemini was used is important too, btw. I just tried Gemini-1.5-pro and it worked just fine. So I really think the newer versions of LLMs are able to catch this.
That’s an interesting use case. We thought of a composition-of-experts router with specialized models, such as code-generation models to handle coding tasks. Currently we encourage users to experiment with their own data, and we're happy to help. If you're interested in applying this to smaller LLMs and specialized adapters, let's connect!
This only becomes interesting if electricity can be produced from photons reflected by the moon, so that energy production is possible at night. Other than that, I believe in fusion, although the giant fusion reactor in the sky does help during the day. Instead of making photovoltaics more efficient, they should do this with batteries.
I believe we will build a functional net energy gain fusion reactor probably in the next decade if things go well (I’m rooting for SPARC), but we will still need to build an actual power plant (designed for long life, serviceability, improve efficiency based on what was learned before) and that will take a while. And then we need to build lots of them. And they will be very expensive.
Probably fusion power will not be cheaper than renewables inside of 50 years, because fusion power plants will simply be very expensive.
In the next 20 years we need to decarbonize as much as possible. Fusion sadly won’t have much of an impact for that.
But in 30 years, when today's new renewables are at the end of their service life, we have an opportunity to replace them with fusion. That said, renewables will be that much cheaper in 30 years. I think for a while fusion will make the most sense for large industrial manufacturing operations that necessarily require large, constant amounts of power.
Even if fusion were widely available and affordable, you would still want other sources of power for peak demand. Like fission reactors, fusion plants will be good for base load; they won't be able to spin up and down with demand willy-nilly.
Given that most demand is during the day and early evening, solar is a good complement, but the more mixed renewables you have in your grid, the better it will tolerate shocks in supply and demand.
> This only becomes interesting if electricity can be produced from photons reflected by the moon, so that energy production is possible at night
"referring to thermal energy grid storage (TEGS) consisting of a low-cost, grid-scale energy storage technology that uses TPVs to convert heat to electricity above 2,000 C"
You all speak in miracles here, the use case seems to be converting thermal energy and energy storage. Why the moon, and what does that have to do with regular photovoltaic efficiency?
They, like me, read the title as "Photovoltaic", which means solar cells, and the comment was presumably about that. I was also reading the headline and the first comments entirely confused, until I read the article and it elaborated that these are "ThermoPhotoVoltaic" cells, which involve heat and tie in to the article's comments about this being used for energy storage.
All around, confusing. I didn't even know we had such a thing.
I was thinking you could cover the moon in rotating mirrors to redirect the light to solar panels. And if they can be controlled independently, you can tweak them and basically use the surface of the moon as a giant display. Who wouldn't want the moon to look like a giant Apple logo?