
The "free press" doesn't have a "get out of jail free" card of being able to claim that they are just a "platform" and be able to dodge all responsibility for what they print like Facebook does.

If the press writes something objectively false about someone, they are staring at a defamation lawsuit. If they write something that gets someone killed, they will be staring at possible criminal liability. If they print, I dunno, a picture of a child having sex, there'll be hellfire and brimstone (both legal and social).

And no, saying "ha ha, it was just an opinion column" doesn't save them.

"Press" has perks but also has responsibilities. The free press is "ok" with these rules because they've had to follow them for decades. What they want is a level playing field.


Publishing a story by a reporter, even the best one with all the Pulitzers, isn't at all comparable to statements by a president elected by tens of millions of votes, who controls the most powerful military force in the world.

The same press will report on those racist tweets, run stories all week dissecting them, plus opinions, angry letters, etc. Should the press stop running stories about the racist behavior of the president? How is running a story about what the president said any different in terms of impact on spreading those views? Would Trump be where he is right now if the press hadn't paid attention to his crazy tweets and fueled his popularity?


> (To be fair, knowing ffmpeg exists doesn't mean I'd be able to easily write a video player with it without a lot of research. For that reason I find this tutorial is still quite interesting and valuable.)

The tutorial is still 1000 lines of C, involving SDL, ffmpeg and threads.

Yikes :)

Last time I did something like this (a UI that, among other things, played a live stream from a cheapo IP camera), I used Python and the libvlc Python bindings. Creating a bare-bones media player with it was trivial and it worked very well for what I needed. The only complaint I had at the time was that the documentation was absolutely terrible. A cursory look today reveals that they seem to have improved it, so yay?

And yes, the end result in my case came out MUCH smaller than 1000 lines (but lines of code is a shitty metric anyway).

If you "just" need a media player, IMHO, libvlc is not an awful option (in Python, at least - I have no experience with other bindings): https://wiki.videolan.org/LibVLC/


I don't know how long it is "supposed" to take, but you can compile a Linux kernel (27.8M lines of code, though a good chunk of it probably isn't going to be compiled anyway because you don't need it, and another good chunk is architecture specific) in under 10 minutes on relatively modest (but modern) hardware.

On the other hand, something like Chromium (25M lines of code) will take about 8 hours, and bring your machine to its knees as it consumes ALL available resources (granted, last I did this I only had 8GB of RAM, and I was running my desktop at the time... including Chromium). I don't remember exactly how long Firefox takes to build, but I remember it was significantly less time (maybe 3 hours?).

So... it depends? On a lot of things?

(btw, LoC numbers were pulled from the first legitimate-looking result I could find in a quick search... take them with a grain of salt... also, compilation times are a rough approximation based on my observations... take those with a truckload of salt)


Linking can consume a huge amount of memory, especially for C++ code. For a large project, 8GB might be very low. On our codebase we saw huge differences between 16GB machines and 24GB machines, as the 16GB ones could not run some linking steps in parallel without swapping.

Does the Chromium build use LTO (I'm pretty sure FF does)? That's also a huge resource sink and doesn't parallelize as well (a lot of the optimization is delayed until linking).


Last time I checked (admittedly a few years ago), a full build of Firefox on my beefy laptop was ~2h while working on something else and Chromium was 10h+ bringing the system to its knees.


You don't have to be a lawyer to understand that subtraction and multiplication are two completely different operations.


Are you sure?

https://legalbeagle.com/8608294-difference-between-larceny-t...

> In many states, "theft" is an umbrella term that includes all different kinds of criminal taking. This is the case in New York. Under the New York Codes, theft can be any type of taking, like identity theft, theft of intellectual property, theft of services and theft of personal property


If you're a mathematician and not a lawyer you might think those are different operations. But lawyers, judges, and juries have a unique capacity to argue that you're guilty of subtraction even if you only multiplied.

To a lawyer, bits have color: https://ansuz.sooke.bc.ca/entry/23.


You can totally find a mathematician to convince you that those are the same operation! Probably easier than the lawyer, even.


The "web" is built around "human readable" technologies. Even actual implementation details that the user doesn't care about - like the application layer protocol (HTTP) and the source code for pages (HTML, CSS) - is human readable.

The "point" of the web was to serve humans, not machines. If we wanted to serve machines, we'd just throw binary blobs around, which would be orders of magnitude more efficient.

That said, I still have a bunch of "ancient" tech magazines that had directories of URLs for (then) popular websites, grouped by category. That's how we found things then.

People forget that there was a world before Google.


Software certification matters in certain industries.

In automotive and aerospace, you will find that being certified opens a bunch of doors for you.

From a company's perspective, these certifications don't PROVE that you know what you are doing, but at least they prove that you aren't completely in the dark with regards to all the annoying processes that exist in the industry (stuff like MISRA-C, Automotive SPICE and so on). Or at the very least, you aren't lying when you say that you've TOTALLY heard of them, honest (unshockingly, people lie on their resumes, even about things that can be easily checked).

This might not get you hired, but it will make them consider you.


I agree. To add another industry, with government and/or defense contractors, certifications matter immensely - sometimes more than actual work experience.


This is unfortunately true. I met one too many people who had the certifications for a job but not the skills when I worked as a government scientist. Some people can pass any test you throw at them without retaining or learning a thing.


The decline of Firefox is largely due to Mozilla's incompetence.

If you look at their own statistics [1], you will notice that (worldwide) usage started declining after the introduction of Quantum (November 2017), and dropped substantially after the April 2019 addons outage.

This would suggest, IMHO, that Firefox loses market share when beloved features - customization, addons - stop working or get worse.

This is unsurprising to anyone who has been using Firefox for a long time: the primary differentiation factor of Firefox has always been customization. Messing with that would always be a risky proposition. Nuking the whole ecosystem entirely and starting over in the space of a few months was simply idiotic. Disabling everyone's addons because of an admin screw-up was just the icing on the fucking cake.

[1] https://data.firefox.com/dashboard/user-activity


Keeping XUL extensions would have meant continuing to sacrifice performance, security and stability. That could never have worked long-term.


> the market has finally caught up to the fact that Tesla is years ahead of their competitors.

Incorrect. The stock value is being pushed up by:

1. Unexpected quarterly profits in October;

2. Tesla planning to open a new factory in China and securing a $1.4 billion loan to do so;

3. Apparent strong interest in the Cybertruck and the impending announcement of the Model Y;

4. The USA and China are easing the trade war, which is pushing the whole stock market up (there have been "records" left and right in the past few days).

But the big one is number 2. Tesla moving into China means that they can tap into the humongous Chinese market.


Well, TSLA is a very sentiment-heavy stock, so it's hard or impossible to say what exactly caused this recent run.

For example, they've been guiding Q3 and Q4 profitability since ~Q2 or earlier. They started building the Shanghai factory in Q1, the Model Y was announced in March, the Truck's been talked up since forever, etc.

What's really changed is that the Taycan is no longer a mystery, but a real car with a real price tag and real range. The ID.3 is almost-real and struggling with software issues. The E-Tron, new Leaf, new Bolt, etc have real sales figures. When these were unknowns, you could be reasonably skeptical of Tesla's claims, but that's increasingly untenable.

Or, as the great Silicon Valley philosothoughtleader Russ Hanneman said: https://www.youtube.com/watch?v=BzAdXyPYKQo


Everything you just typed can be explained by the phrase "Tesla is years ahead of their competitors", so not sure why you're so solid on that "Incorrect" statement.


5. Tesla is one of the most manipulated large cap stocks on Wall Street. You're observing one of the many rounds of a pump and dump and 6 months from now it'll be at just above 200 again as something "bad" is discovered by the "free" press just in time for the big players to cash out again. Anyone who buys at 420 is a sucker.


Stock is up mostly because of short-covering. This is what a textbook case of a short-squeeze looks like.


Tesla already sells cars in the Chinese market, especially since there is really no other luxury EV alternative and Beijing ICE plate lottery allocations are lower than the EV ones. The factory means Model 3s no longer need to be imported, so they should be cheaper.


>impending announcement of the Model Y

The Model Y was announced in mid-March, to generally negative reviews/market reaction.


> Thanks for the warning, are there any good tools that integrate with pull request checks that can be self hosted?

Recently at work we've done an analysis of a number of CI/CD tools. Many of them are self-hosted, free (and open source) and support a variety of workflows - including pull-request-related checks (either by polling or via webhooks). A cursory search will yield you a lot to play with, so... go ahead and do that.

That said, if you don't want to think about it too much, you can't go wrong with Jenkins.

Some people will suggest GitLab. I'd steer clear of GitLab, though, because of their operational incompetence [1] and their funny ideas about mandatory corporate espionage [2] - I mean, telemetry - which they only backed away from because people yelled at them. That second one was enough to disqualify them in our analysis (the first one just cemented the idea).

[1] https://about.gitlab.com/blog/2017/02/10/postmortem-of-datab...

[2] https://www.theregister.co.uk/2019/10/30/gitlab_backtracks_o...


Gitlab CI is great if you need only one pipeline to execute per branch.

As soon as you need multiple pipelines per branch it doesn’t work well.

Most small projects need only one pipeline and are well suited to it. Other things, like Terraform setups needing multiple pipelines for multiple environments, are better served by a CI platform that handles multiple pipelines.


It seems like their gitlab.com uptime isn't that relevant when the question was about self-hosted CI. And while I don't like the idea of telemetry in self-hosted products either, calling it "corporate espionage" is hyperbole and I don't know how much more one could expect from their response other than apologizing and backtracking.


What's wrong with apt is that you can't get what YOU want: you can only get whatever the Debian people decided makes sense for one specific version of the OS - which usually means a version of the library that is two years out of date.

If you need a version of a library other than the Debian-approved one, you are back to manually downloading source tarballs from SourceForge and figuring out their arcane build system.

Also, apt doesn't let you have two versions of the same library installed. For SOME things, there is more than one versioned package, but in general, you can't have STABLE_VERSION installed for your day-to-day OS usage and DEVELOPMENT_VERSION installed for your development needs. You only get one or the other (this isn't exclusive to apt - all Linux package managers do this, AFAIK).

Anyway, contrast that situation with pip (python), where you can just grab whatever version you want, have the package manager solve the dependencies for you, and slap it all into a virtualenv where it won't interfere with your system-level libraries. Heck, you can even grab versions straight from git, and it will (try to) solve the dependencies (if any).

It's a WHOLE different level of convenience.
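
For illustration, a rough sketch of that workflow, driven from Python's standard library rather than the shell you'd normally use; the directory, package names, versions and URLs below are just placeholders:

    import subprocess
    import venv
    from pathlib import Path

    # Create an isolated environment with its own pip.
    env_dir = Path("dev-env")
    venv.create(env_dir, with_pip=True)

    # Use the environment's pip so nothing touches the system-level packages.
    pip = env_dir / "bin" / "pip"  # "Scripts/pip.exe" on Windows

    # Pin an exact version of one library...
    subprocess.run([str(pip), "install", "requests==2.28.2"], check=True)

    # ...and pull another straight from git, with dependencies resolved for you.
    subprocess.run(
        [str(pip), "install", "git+https://github.com/pallets/flask.git"],
        check=True,
    )

Day to day you'd just type the equivalent python -m venv / pip install commands in a terminal, but the point is the same: exact versions, git checkouts and isolation, all per project.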


First of all, the most popular apt-based distro is not Debian, it is Ubuntu. Second, you can set your apt sources to whatever you want. Plenty of people publish apt repositories other than the Ubuntu project. Confluent for example makes all their software (Kafka etc) available through their own apt repositories.


Many package managers support an “install root,” and there’s always Docker containers.

Those obviously don’t solve many of the other issues that a real package manager would, of course.


I agree - but the counter-example I'd give is Portage, which gives you all that and more.


dpkg -i deb_file_I_downloaded_off_the_interwebz

Phew, that vendor lock-in that includes tools to install whatever I want!


sigh

That random file you downloaded off the internet was built under a specific set of assumptions - assumptions that only hold true if you are running the specific OS version they were targeting.

IF you download the .deb file for your specific OS, and IF you manually install all missing dependencies, then it works. Otherwise, you are still screwed.

At least you can extract it (it's just an ar archive wrapping a couple of tarballs anyway). But that's no different from going to SourceForge or GitHub or whatever and getting the source tarball... and we are back where we started.

By the way, I was not complaining about "vendor lock-in". I was complaining about Debian's package management policies and how they can affect your software development process in practice - to make the case that apt is a crap replacement for a proper language/library/development/whatever-oriented package manager.


Yes, it turns out that installing software comes with the assumption that you're putting it on the right OS. This is the case for Windows, for Mac, and so forth.

I've never had an issue with a deb package, but I'll tell you it's one of the reasons I stick with Ubuntu for home.

