This might not 'mean much' to the computer, but it means a lot to humans. The computer uses it to communicate with humans, and humans use it with each other. When I arrange a meeting across time zones I will say CET 16:00 or ET 3:00 PM, and people will understand that faster than if I said how far we are offset from UTC.
But to summarize, Copilot is an okay ChatGPT wrapper for single line code completion. Cursor is a full IDE with AI baked in natively, with use-case specific models for different tasks, intelligent look-ahead, multi-line completion, etc.
For me, the main draw is Cursor's repo-wide indexing (presumably a RAG
pass). I asked Copilot to unify two functions, one at the top of a
massive file, and the other at the bottom. Since they were so far
apart, Copilot couldn't see both functions at the same time. Cursor
didn't give me a great answer, but it did give *an* answer.
You can pass in @workspace before the prompt (in Copilot chat) and it looks at the full context. It works OK; I could imagine Cursor being more powerful at this!
This is great, similar to the Johnny Cash video! Although that had somewhat better execution: users could submit as many frames as they wanted, and you could select between multiple versions of a frame.
Will the voices be dubbed by the community as well? If not, how does this avoid the copyright issue?
Hi everyone,
We have been dealing with the underperformance of Time-Series Machine Learning models for a while (mostly due to regime changes), and hadn't found the right library to complete Adaptive Backtesting before the heat death of the universe.
We ended up writing a library from scratch, called Fold, that comes with an order-of-magnitude speed-up.
---
This is the launch of our core engine so we would love to get some feedback on Fold (https://github.com/dream-faster/fold)! We'll be here and happy to answer any questions.
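For readers unfamiliar with the term: time series cross validation means training only on data that precedes each test window, so the model is always evaluated on "future" samples. Here is a minimal generic sketch of an expanding-window splitter in plain Python/NumPy; this is just an illustration of the idea, not Fold's actual API.

```python
import numpy as np

def expanding_window_splits(n_samples, initial_train, step):
    """Yield (train_idx, test_idx) pairs where the training window
    grows over time and the test window is the next `step` samples."""
    start = initial_train
    while start + step <= n_samples:
        train_idx = np.arange(0, start)            # all history so far
        test_idx = np.arange(start, start + step)  # next unseen block
        yield train_idx, test_idx
        start += step

# 10 samples, train on the first 4, then test 2 at a time
splits = list(expanding_window_splits(10, 4, 2))
```

Each successive split retrains on a longer history, which is what lets a model adapt when the underlying regime changes.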
We'll be working on the pytorch integration soon!
`Fold`'s scope is time series, but there's nothing stopping you from sending in vector embeddings of any kind, timestamped.
Figuring out how to create those embeddings (that make sense over time) can mean quite a bit of research work, and requires flexibility, so it's probably better done outside of the time series library, with the tools of your choice.
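To make the "timestamped embeddings" idea concrete, here is a hypothetical sketch: random vectors stand in for any embedding model's output, and indexing them by timestamp turns them into an ordinary multivariate time series that a time-series library could consume. The column names and data are made up for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
timestamps = pd.date_range("2023-01-01", periods=5, freq="D")
embeddings = rng.normal(size=(5, 3))  # 5 documents, 3-dim vectors

# Index the vectors by time: now it looks like any other
# multivariate time series.
X = pd.DataFrame(embeddings, index=timestamps,
                 columns=[f"dim_{i}" for i in range(3)])
```

How you produce the embeddings themselves (and whether they stay comparable over time) is the research question that belongs outside the library.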
> We'll be working on the pytorch integration soon! `Fold`'s scope is time series, but there's nothing stopping you from sending in vector embeddings of any kind, timestamped.
Awesome. I'll take a look. Thanks!
> Figuring out how to create those embeddings (that make sense over time) can mean quite a bit of research work, and requires flexibility, so it's probably better done outside of the time series library ...