A maintainer of RubyGems was forcibly removed from the RubyGems GitHub org — which was renamed to Ruby Central — along with every other maintainer. Then access was restored, then revoked again. There was no explanation, no communication, and no understandable reasoning for this.
This wildly transcends "issues with both internal and external communication" or "we're just a bunch of makers who can't be expected to be good at organization or communication" (to highly paraphrase TFA). This is an absolutely disastrous breach of the community's trust.
Access was cut, then restored, then cut again, then days passed before someone finally said "hey, we were going to lose critical funding." That makes it seem like a post-facto excuse for a hostile takeover.
And the whole "oh, well, we're bad at comms" makes it sound even worse!
Which is the whole crux of the issue. At no point in any of this did Ruby Central do anything reasonable. Then they tried to explain that their unreasonable actions were reasonable, if you only knew the things they knew, which they were for some reason unable to tell people until just now.
> Let's get some kind of committer agreement in place with those folks who need access (the same way many other high profile open source projects have), and remove access from those who don't, while still being fully open to accepting PRs and being open to re-welcoming them as committers if they decide that is how they want to spend their time in the future.
> Here's the challenge. How do you tell someone that has had commit and admin access to critical infrastructure long after that need has expired that you need to revoke that access without upsetting them?
Ruby Central sponsors him to work on the project. They also own the project. Sure it’s not ideal that they’ve apparently come to an impasse of some sort but locking him out is not bonkers.
I've seen some contention around that. RC owns the rubygems infrastructure. But it's not clear that they should own the repos of the open source rubygems or bundler projects that they use. The repos just seem to have fallen to that organization by way of some admin owner passing through, rather than through an official hand-off.
Ruby Central as an organization touts that it is responsible for RubyGems. Assuming this narrative is accurate, they needed to get agreements in place with contributors to appease some funding partners.
This shit happens. Especially as an open-source project started by one dude in 2009 turns into critical infrastructure managed by a 501(c)(3) non-profit.
That they failed so fucking spectacularly speaks incredibly poorly of their board.
Folks did this plenty prior to Electron, either by using cross-platform GUI toolkits (GTK and Qt both run on Windows, Java's been doing this forever with Swing and JavaFX, etc), or by writing GUIs for multiple toolkits/OSes that work with the same/similar core application logic.
Electron makes it easier to build cross-platform apps, and certainly cheaper, but it's not like it's the only way to do it.
Regardless of what you think of Teams -- I myself have had nothing but poor experiences over 2.5 years of using it daily -- it's telling that Microsoft has to require folks to use Teams.
I'm sure this is just Microsoft unifying everyone on the same comms platform, but seriously, I don't know anyone who chooses Teams.
I know non-tech folks who chose Outlook 365 because of familiarity and then end up on Teams because it's free, but there's a difference between "I chose an email/identity platform that I know and I guess I'll use its chat app too" and "I evaluated team chat offerings and Teams is our top pick."
Hell, at my most recent company (which was founded on O365 before I arrived) I replaced Teams chat interface with self-hosted Mattermost (Slack's HIPAA-compliant tier is way too expensive for a startup) and it was roundly loved. We did still lean on Teams for its video chat, because most of our non-tech staff know how to schedule and join video meetings, but even then the top complaint I got was from folks on Windows laptops whose Teams plugin for Outlook somehow got corrupted (or something?) and suddenly Outlook's Teams integration was gone.
Just an awful product all around -- said with no offense meant to the team building it.
_Update_: I now notice the text "for the sole purpose of video conferencing" which lines up with my use case, but still -- of all the video apps, I put Teams down with Webex as "bottom of the barrel" choices due to the constant performance and functionality issues.
If nothing else, Teams is horrendously inconsistent and one might naïvely think that MS would want to dogfood it better, with good statistics.
This HN article now has nearly 600 comments of people – mostly – griping about Teams. A large number of them are replied to with people saying "Oh, it's never done that for me!" or, alternatively, "Teams never works fully, but at least X works" being replied to by "X has literally never worked for me". They're all right. I've had vast numbers of random errors – like the application bars just disappearing, or a thousand and one "Sorry, Something Went Wrong™!" errors, but fundamentally, it's indescribably awful.
Periodically I'm asked to give feedback about how a call went. I always give one star. I'm not cruel and petty – it genuinely is always one star, where with my hardware, Zoom is pretty much real-time HD audio and video. People chop in or chop out, or I hear fan noise, or I wasn't able to join the link in Chrome because – well, it recognises that it's in Chrome but the version numbers don't match, and it thus asks me to download Chrome or Edge to join in Chrome – whatever. Microsoft must know that it's made a dog, and a very, very positive take would be that they want to make it better by having a larger base of competent developers to call upon to basically bug and betatest.
If I worked there, I'd have jumped ship at the acquisition, however...
Exactly, thank you so much for putting the experience into words. The random errors are just so... astoundingly random.
I've had problems with a headset on USB-C (Sennheiser EPOS). Teams would connect once, and then never again. Another time it would stop responding to any click. Then it would reload whenever changing team/channel. And then it would no longer show any notifications.
I haven't seen an app with that amount of apparently completely random errors, often not reproducible either. It's like a Bethesda game.
On a regular basis I use Teams, Zoom, and Jitsi for video conferencing. Teams isn't the most troublesome of the set.
> I'm sure this is just Microsoft unifying everyone on the same comms platform, but seriously, I don't know anyone who chooses Teams.
In big companies like GitHub people don't typically choose their own video conference platform. It's picked for them.
For Microsoft I can see a huge benefit to using Teams at GitHub. That's cost. Microsoft can use Teams at cost. That's a better price than those outside Microsoft can get. It's a better deal than paying for Zoom. At a time when expenses are being cut it's hard to justify paying for a competitor's platform.
In my Big Company (400,000+ employees) it is not permitted to host meetings on other systems without authorization. Using unapproved software is strictly not allowed. There can be plenty of reasons, including business data security, legal data retention, international data privacy laws, licensing agreements, etc.
I remember when Skype was considered controversial because the indication of working status and access to employees after business hours was potentially a violation of workers' rights and privacy laws.
Huh, it was but I can't take credit for being clever about it, just simple order of magnitude estimation with a factor: e.g. I thought of a handful I was pretty sure were 500k+, rounded up to 10, then doubled as I know I don't know much about the space.
I’m trying to understand a world where employees at a large org will install a random comms client instead of just using what’s already installed. I worked at a place that used IBM sametime until they migrated off lotus notes. No one used anything else, because why? You’d need to convince every other person you wanted to talk to use it too, was much easier to just use the existing app.
Often these tools are adopted by smaller teams for their communication. So official company meetings would happen over teams, but bitching at Bob because he still hasn’t reviewed your PR would happen in Slack. It can make information management a nightmare, especially when users start sharing files with each other via Slack.
Most big co’s make it pretty clear to employees that it’s a pretty big no-no to discuss proprietary work through an unaudited 3rd party (especially a competitor).
Big Company can make it very difficult to use an alternative to the preferred tool. Blocking network traffic or restrictions on what binaries can be installed on corporate computers can be very effective at keeping the team using the same chat client.
It's not just video conferencing. Teams sucks if you need many ad-hoc working groups based on topics or small teams. Meaning, it sucks for exactly the kind of work developers do.
Yes. Zoom executives worked directly with China to disrupt users' calls that were anti-China. They later formally added Chinese-government-related takedowns to their TOS.
Before the pandemic, Apple had to remove Zoom's Mac app because it was basically a virus.
Google and Amazon aren't perfect, but you can use their video calls without worrying that an executive will dislike the content of your call and mess with you. And they don't use anti-patterns to try and route you from the web app to installing a virus.
I'm too old to put up with software that doesn't work solely because it is closed-source and so doesn't accept patches :P Trying to get rid of the Discord client at the moment...
I think the biggest problem with Teams is the leadership's strategy.
They seem to be focusing on adding as many glossy features as quickly as possible. Quality, performance and consistency don't seem to be a point of consideration at all. Yet as a user who spends a lot of time in it each day, I really don't care about animated waves on top of my video, or this Together mode. I just want it to work: not take ages to open, not show stale status info until I click on a user, not have choppy video, not run my Mac's fans at full blast, and not be so cluttered that it's almost impossible to find information we shared in chats in the past. And the worst thing for me: the information density is so low. The big bubbles around everything seem to care more about looking pretty than actually showing information, and they cause way too much scrolling. Slack does this so much better.
The Teams guys can learn so much from their VS Code colleagues. It's really weird how one company can produce one of the most infamously bad Electron apps and also the gold-standard best one at the same time.
PS: and please, tell me what's going wrong. "Something went wrong" is ridiculous. And let me log into multiple tenants at the same time without switching.
Pro tip: go to settings. In general, there is a "Chat density" setting. Change that to "Compact". Wish this was the default because it is so much better!
It's CDD, checklist driven development. When you are competing against another product in the enterprise space, the people making the decision probably won't be using what you are selling often, but they will look at the list of features, so money spent adding new features is more important and higher priority than stability, reliability, or any other nonfunctional requirement.
The main selling point for teams is that it's free for existing M365 customers. Microsoft is aiming specifically at a "quick win" for IT managers to cut a competing product and replace it with something they're paying for anyway.
And usually the top execs spend a lot of time in meetings themselves too; it's not something they won't be using.
But I know what you mean, in our company the top execs have their own support team so they don't even know how bad our outsourced support is. A lot of production issues are streamlined for them because of this.
This resonates. Microsoft does this with a lot (not all) of their products. It's also my experience here in the Netherlands, especially how Microsoft does sales. It's always management who forces everybody to use these products against their will, especially as they come for free in their 365 package. Never mind that it's a pale imitation of an MVP version of its competitor.
For the decision makers, it doesn't really matter that the consequence is loss of work satisfaction and productivity. That is _their_ problem.
My regret is not finding out about this earlier in my career, in the future tooling will be an important consideration when choosing jobs, and I'll avoid orgs who are in an iron grip by Microsoft like the plague.
As someone who just moved to The Netherlands, this bugs me as well. Many of my interviews happened over Teams, which should have been a red flag. Luckily, my IT department is small enough that we can use Discord for general chat and some video calls, if we don't need high resolution, but we still have to use Teams to interact with the rest of the company.
I recently tried Edge to use the new Bing and I feel the same way. So many features, half of them buggy, and zero coherence throughout the app. (That said, few outright bugs, although I assume I mostly have Google to thank for that.) Not sure what it is about MS and features, features, features.
Fwiw, I do know quite a few companies outside the tech space that use, and even love teams. These are the ones where I have personally gotten to see how they use teams:
2 car sales companies. 1 accounting shop. 1 ultra-large company that owns multiple types of businesses across 4 different countries. A few real estate businesses. The vast majority of people using it have said they are happy, and several have even been excited to show off their setups. All of them said that it allowed them to go remote during the pandemic and has helped them stay flexible once movement and office work started returning to normal.
The way they use it is also interesting. It's not just chat. They integrate a whole host of apps directly into it, including sales pipeline tools, document management, and project management. We are not talking about chat bots either. We are talking integrated apps + interfaces. All their meetings happen in MS Teams itself. The multinational has a non-trivial arrangement using multiple instances, where some are separated for client convos and others are for different teams.
I use the term interesting because the way they use it is nothing like how I've ever seen Slack being used. And I don't think I'll ever see Slack supporting that. Putting it out there because I think there's a world of use cases for MS Teams that doesn't involve people being forced to use a bad thing.
Yes, that's how most people use MS Teams. It's extremely useful. The sharing space for documents is good. Collaborative editing in Office is good. Conferencing is good. Presenting from Powerpoint is good.
It's the same thing in every discussion involving Teams: most HN readers don't realise just how unusual the tools they use and what they want are with regards to the average job.
> most HN readers don't realise just how unusual the tools they use and what they want are with regards to the average job.
Probably because it literally doesn't matter what experience other have. The only thing that matters to me about a tool is if it's helping or hurting.
Teams does both, which I consider damning because there are other tools available that are much lighter on the "hurting" side.
But companies don't actually seem to care about things like that. They see that they are getting Teams "for free", and so that's what we're forced to use.
>it's telling that Microsoft has to require folks to use Teams.
Not really, I think, considering that MS bought GitHub with all of its already existing culture. I see this move as them homogenizing the infrastructure. And also not wanting to pay another company for a product that's actually their competition.
Of course, and I can't fault Microsoft for that, even if (having been on the bad end of a similar acquisition and IT merge) it sucks for GitHub.
My point was: GitHub as an organization didn't choose Teams willingly, and are still paying for Slack and only using Teams for video conferencing. Of all the explanations of why that might be, the easiest to land on is "because Teams just isn't that good."
Or it could just be that "this is the way we've always done it" / that Slack has enough inertia in the organisation that changing workflow, updating shortcuts, etc. is inconvenient and thus left until the last minute to change over.
I haven't used slack but broadly speaking Teams / Zoom / all of the other platforms I've used have been roughly the same, in that they all get the job of text and (usually) video communications done. Some might be a bit nicer to use than others but largely it doesn't matter which one you use as long as everyone is on the same platform.
In my previous workplaces I have used primarily Slack, with teams for larger video calls.
In my current work, it's 100% Teams. I'd be digging in my heels over swapping to Teams as well if I were GitHub; using it for any kind of text communication is such a massive downgrade.
Given how human psychology works, Occam's razor would suggest the most likely explanation is "it's not sufficiently better to overcome the inertia of everyone already being accustomed to what they're using".
HN of all places should know better than to believe that people using one thing instead of another must mean the other thing is worse.
Even if they were paying $10 per user, the total is basically the salary of one non-junior engineer to keep cross-company communication everyone is already used to.
It comes down to paying your competitor money to do something that you already do yourself. Some manager probably got a bonus out of this move.
And corporate espionage is more prevalent than you may think. Can you absolutely guarantee, 100%, that the competitor (like Zoom in China) isn't sniffing their video traffic or recordings of some of their most sensitive internal business discussions?
I have a Surface Pro 7 running Windows 11. Video calls in Teams will reliably cause an actual BSOD (blue screen of death, for the younger readers), and my coworker with the same hardware experiences the same issue. I don't have issues in Zoom or Google Meet, but Teams video calls will reliably cause an honest-to-God BSOD. The first couple of times it happened I was a little bit nostalgic; I hadn't seen that since XP, and even with XP it was rare.
I don't know how I could have a better environment to run teams, I'm using Microsoft hardware, running a Microsoft camera driver, on a Microsoft OS, using a Microsoft messaging platform. Absolutely ridiculous, thankfully we only have one external vendor who uses teams. Ironically, that vendor is an oracle provider, so their whole existence just revolves around hideous software.
Teams makes no sense. The audio on it is objectively terrible for calls. Zoom is crystal clear [its UI has issues though], but on Teams I have to actually move my head closer to the computer speakers. Another interesting aspect is that it appears to actually limit the volume. I plug in better speakers during a call and no change. On Zoom it just auto-senses the new speakers and boom, louder.
I don't understand why Teams constantly asks me if I want to upload a new version of a document when I upload a document; the answer is always yes, that's why I am uploading it. I don't care what you do with the original. I am sure there is a way to fix this but I don't have the energy to figure it out, I just want it to work like Slack. Why does it have a separate notification area for people using emojis that makes me click outside of the chat to remove the notification?
The entire thing is so far removed from intuitive and "it just works" that I almost think its intentional.
You’ve mentioned this app before, last I checked there isn’t a usable client yet. Are you just hoping to get more FOSS devs on board or has something changed?
Yes, and I don't want to follow that because most of the companies restrict third party apps. This means that you can't have a custom client for a good percentage of the companies.
> it's telling that Microsoft has to require folks to use Teams.
Most people, when they get comfortable with something, are averse to change. Even if the thing they would change to is better than what they have, they still won't want to learn a new thing.
Not saying that's the case here. But without real data to lead a decision, people just do what they prefer, rather than what's better. (I use both Slack and Teams daily within my company, and while I prefer Slack, I could probably learn to deal with Teams)
A large organization may have to force a change in order to reap benefits. And there are significant benefits to unifying communications. There are also workarounds and alternative solutions, so you usually need to invest some serious time in evaluating a huge switch to know what's best. Or, you can just make a decision, pull the trigger, and live with the consequences (an "executive decision")
People love change. Look how Google, Amazon, and social media took the world by storm. And pocket calculators, smart phones, large screen TV's, the list goes on.
People like changes that benefit them. On the other hand, probably the #1 user request for enterprise software is: Please don't change anything. This tells you something about the software. Common reasons are:
* The last major change broke everything and we couldn't do our jobs for weeks.
* The new system has nobody to help us when there's a problem.
* All of our files / contacts / messages disappeared, or we can't find them any more.
* Meanwhile, we're still expected to keep up the pace of work.
Another way to put it is, people follow the path of least resistance. If they can still make an old thing work, and adopting the new thing seems like a chore, they'll keep the old thing. OTOH, if they see the new thing will greatly ease their life, the path of least resistance is the new thing.
Of course they mostly do this for short-term personal gains. For new solutions that seem like a chore to adopt and only have long-term gains, they won't want to.
Change is painful. It's costly, and comes with risk.
People will embrace it happily, though, when the benefit of doing so clearly exceeds the pain it brings. When you see people resisting change, it's because they don't see that great of a benefit.
* The workflow to use your software is so unintuitive I've had to learn it by muscle memory
People don't mind when a nice discoverable UI adds some more features. They don't like it when a horrible poorly thought out one that they had to painfully learn is suddenly different. That button that used to be hidden in the edit menu? Now it's on a ribbon menu which has to be expanded. Good luck hunting!
An interesting lesson is to visit a user site and notice the step-by-step instructions, written out and taped to the sides of monitors, or pinned to cubicle walls. Those instructions were often hard-earned, either written by the user or shared among workers. That stuff becomes obsolete if the software changes. Likewise for online instructions, such as blogs that give instructions in the form of screen captures and text, and that are no longer valid. If the unofficial instructions are better than the official ones, or more legible than the GUI, then it's a step backwards for users.
For enterprise software, people don't like solving a problem twice. Responding to the change doesn't move forward current goals, so any change is bad -- the current problems have been mitigated, so even something that gets rid of the old problems, at the expense of needing to do some work to integrate and with uncertainty about new problems, is bad.
Really? 2.5 years of "nothing but poor experiences"? I have to ask: what the hell is wrong with your computer?
I started using Teams heavily in 2020 and yes, back then it was unequivocally the worst of the bunch. But I've seen it make great strides, and it's been many months, perhaps years since I've really had any problems with it. Audio, video, screen sharing, PSTN dial-in, whiteboard, call handoff between mobile app and desktop... it all works fine. I'm not using anything special. Just a run-of-the-mill MacBook Air. They started pushing out an Arm-native build for macOS last year and that really solved the slow perf and battery drain. As long as you stay reasonably current with updates, Teams works just fine.
It's just lazy at this point to bash Teams "because".
I have to ask: what the hell is wrong with your perception?
What other IM/AV applications have you used before, and on what hardware?
If you've never seen better, it's not surprising that you'd think Teams is good.
I vividly remember that 20 years ago there was MSN Messenger, and the experience was far better than that of Teams on hardware at least an order of magnitude less powerful. After that was (pre-MS) Skype, which was also not that bad.
Teams is an absolute pig in comparison. It works --- just barely. Audio and video calls are probably what it does best, and "best" is relative. For IM, it's beyond horrible.
Really? I use zoom for hours and it regularly eats my PC.
If teams works just barely there is something wrong with your setup. None of the chat/im applications work barely. They work, to a point, then crash/glitch/etc.
Let’s see, Slack IM notifications on mobile are garbage, I’m just going to assume that I will miss them at this point. Slack audio is an actual joke, we redirect it to teams…
Zoom, eats our computers. We screen share 3-4 hours someone’s screen will turn black because the video card driver will crash, daily.
Google, does mostly ok for calls without sharing and one person speaking. Limited support for physical devices, calls, etc. noise suppression is minimal so having more than one person unmuted is going to blow an ear drum. Limited features.
GoTo, as long as you are using it as a phone and not an app, you can make a call hurray!
Webex is crap end to end, but you will join a session and get through a call, mostly without blowing up. Feature rich? No.
Teams, let’s see random client gui crashes, but the call stays up so you can keep talking but can’t do anything? Probably weekly occurrence on the current release. Sensitive to https termination in networks and offload, oh hell yes. Forgets camera mappings just like slack every time? Yes and yes. Continues ringing on your phone if you pickup on desktop. Yes just like zoom, but at least it can transfer to desktop, most of the time.
I use all of them, for clients, daily, weekly, monthly. Teams by far is the most feature rich client. Somewhere in the middle of consistency, stability, etc.
Is it magical and poops unicorns? No but ya’ll need to stop being dramatic. They all work, mostly, some have cool features, some are dead in the water like slack, WebEx, GoTo, etc. MS is dumping tons of money into developing teams, they are going to break a lot of eggs, less than in 2020 but still enough to make a mess.
I ran teams on a fairly powerful i9-equipped MacBook Pro and it was unbearable. Couldn't even scroll up to older messages without it freezing up, the integrations with Office 365 were also really slow.
Umm, Teams video will reliably cause my Surface Pro 7 to actually BSOD. Not exaggerating, having the video turned on will cause a no-shit BSOD on Microsoft hardware, and my coworker with another Surface also has that issue.
On my desktop with a no-name webcam with a generic driver I have no issues with teams, but the complaints about the software are rooted in actual problems. It's inconsistent, I've never seen a program that can have so many different bugs. We have 2 users who regularly use teams to communicate with an external vendor, they are on exactly identical Dell desktops. One has no issue at all, the other will see massive performance issues when teams is running in the background, somehow it is saturating disk read on a ssd. It would be hard to have more identical systems, they are running the same OS image with the same programs installed, the only thing that's different is the user account they are signed into. I finally just gave up and swapped the user's entire system with a spare, which resolved the issue but as far as I can tell there are no problems with that system. I pulled it out of spares 2 months ago for a new employee and they have not reported any issues at all, and I have followed up with them out of curiosity.
Teams is just super inconsistent, so the fact that it works on 1 computer for you doesn't in any way invalidate all the other problems people report with it.
Edit: unfortunate autocorrect typo substituting a slur for the word 'new'
Nothing, because Zoom works completely fine while Teams is always struggling to make a connection, keep a connection, have good quality video and audio, and well even OPEN. There is a persistent problem on Macs where Teams refuses to open unless you delete an obscure file.
True, Teams isn't as awful as it was in 2020. But it's still pretty bad, and it's a bad that is foisted on many of us. That makes putting up with it even more irritating than it might otherwise be.
> I have to ask: what the hell is wrong with your computer?
For our org, it isn't the PC, it is the org controlling Teams and the firewall.
They have Teams configured somehow such that none of the subsidiaries can talk to each other, you can't talk to outside people or invite them, and the performance is really bad due to the way the clients and firewall are configured.
I'm guessing lots of orgs are in the same boat because Teams is newer than Zoom, WebEx, or GoToMeeting, and for these legacy meeting clients the PCs, firewall, and org have had time to optimize for them.
Teams is more of an afterthought because it comes with 365.
I sort of agree; Teams is working really well for us, a medium-sized, globally dispersed team of developers.
I do have one pain point: my phone's Discord sometimes does not understand that it should be quiet if I am active on a computer at the same time, and dings for every new chat message.
In my personal life, I got all my friends on Discord and we've been really happy with that. Its screen sharing seems tuned for video games, though, and sucks for sharing non-game applications. We all set up Teams accounts because it does so much better with screen sharing.
I haven't tried Slack recently but I really disliked that I needed a different account on each server and the UI didn't seem to unify the servers together in a convenient way like discord does. I'm pretty sure this is for enterprise support reasons and that it is by design, but it's still annoying for use when I'm just trying to talk to all of my circles of friends in one place.
At work we use Teams and I have zero complaints so far. We've been using it since either 2019 or 2020. It's so much better than Zoom + Mattermost or, before that Skype for Business / Lync. shudder
Conversely, my friend group is mostly on Discord for group chats, and it’s the least favourite chat program I regularly use.
I’m not interested in the constant upsells to their paid service, and I find the client fairly unpleasant to use with its lack of configurability (like I can’t even make the window as narrow as I want, there’s a minimum width to it that’s way too wide).
And Discord’s pretty hostile to third-party clients, so I’m stuck using this client that I don’t like.
I use Slack with several customers and it must keep them separate. Nobody would use it if it merged different companies into a single chat. For non business usage, maybe a unified chat could make sense even if I prefer to keep different groups separate as with Whatsapp groups or Telegram channels.
we use discord at work and the only bad thing is the screen sharing, many times we can get away with it but if we can't we usually just jump on a free zoom call
We changed from teams to google meet when we otherwise do everything in 365 just for the video conferencing. Teams was so bad, it was costing us a lot of frustration and time, working mostly remote it seriously impeded our ability to do work.
We still use Teams for chat, but it always boggles my mind how broken it is, especially for devs. The last time I used Slack for work was maybe 7 years ago; I still miss that. Being forced to move to Teams killed any sense of online community in the workplace, the biggest example of how tools can actually shape culture.
>there's a difference between "I chose an email/identity platform that I know and I guess I'll use its chat app too" and "I evaluated team chat offerings and Teams is our top pick.”
You literally just described the strategy for every piece of Microsoft software ever. MS Word (vs WordPerfect) comes to mind.
I used to work at LinkedIn (owned by microsoft) and about 2 years ago they announced engineering was switching from zoom/slack to teams. There was significant uproar and LinkedIn leadership ended up reversing the decision. To my knowledge LinkedIn engineering is still on zoom/slack.
Easy, MSFT throws it in for free when you buy other products. That's why there is no incentive to make it better. If you eliminate Slack or anything else, you get to say "See, by buying Microsoft we're getting our needs met and saving money". Same thing for GitHub; that's just a freebie depending on your spend at this point.
I remember the first 1-2 years after the iPhone was released, I'd see Microsoft employees (mostly sales and presales folks) using their iPhones under the table or when the meeting attendees from other companies weren't around. Heard they weren't supposed to use them.
I get the option between Teams, Skype and Zoom. Teams by far is the best of the bunch, even on Linux. Skype is just a complete non-starter and Zoom's UI is so horrible it's surprising the company is valued as high as it is.
That's a lot of naiveté. Corporations _require_ we use stuff every day. And when that stuff is something you build yourself, it makes zero sense that GitHub is still using something else.
Yep, it’s bundled with office 365 so I’m gonna assume that’s why most companies use it. Which is why we use it at work. I’m not a fan either. Desktop app just seems slow as hell on my Mac.
Are your company’s MS instances on premise or cloud based? The reason I ask is because nearly all of the negative experiences I’ve had with Microsoft stem from either their native apps or poorly configured on premise servers.
This is anecdotal since there’s no wide ranging data, but Teams works just fine where I work. Integration with Outlook is also great, but we have a O365 plan that is completely managed and run from the cloud.
"He cornered me & threatened me. If he has the audacity to give me the death stare ON camera, picture what it’s like OFF camera. I was pulled out of the game & forced to speak to him in a dark hallway."
> It's not marginally +EV to call given the pot odds and she doesn't need to know her exact equity to be cheating, she could be getting binary "you're good" / "you aren't good" signals (and that's much easier to transmit and read without being noticed). [...] The play on the turn is way too ridiculous.
I watched the hand. I've been in that spot. You think you have a good read on someone such that your not-great hand may actually be best, say, they're chasing an open-ender and your high card is good. (Which it would be, barring cards folded by other players.) You're nervous, you're facing an all-in decision, you know the safe move is to fold and move on, but damn all you just can't shake the feeling, and you have to gamble.
Poker player trusting a soul read is the Occam's Razor answer here.
You didn't say this, but I fear that if this were someone like Daniel Negreanu monologuing about what to do and stating his read aloud ("I'm pretty sure you're chasing. I'm probably beat but if you're holding 8 9 I'm beating you right now. I'm right aren't I? sigh I call. Show me I'm beat.") then this wouldn't be an issue at all.
> The other thing is why did she return the money after the hand if she won it fairly?
"He cornered me & threatened me. If he has the audacity to give me the death stare ON camera, picture what it’s like OFF camera. I was pulled out of the game & forced to speak to him in a dark hallway."[twt]
Shit, if I was in that position, and producers/officials pulled me out of the game and put me in a room with my opponent where I was accused of cheating and told to return the money, damn good chance I'd return it just to make a scary situation stop. Maybe you're different, but that doesn't mean intimidation doesn't work pretty damn well.
Right but chasing what - she can't beat J8, QJ, KQ, any high flush draw. Let alone value hands (she's drawing dead against a 10). So she has to put him specifically on 78 (which he had), 68 or 67. That makes it more suspicious, not less.
The intimidation thing makes sense if you believe her story. I can only guess what happened between the two of them after the game. I know that I've seen her change her own story about the hand that she thought she had. If you're questioning someone's credibility, you can't really go with their own story about what happened as a reason to exonerate them.
> Right but chasing what - she can't beat J8, QJ, KQ, any high flush draw.
Fair, there are definitely semi-bluffs that beat J4 here, and of course if Garret had a made hand she's toast.
There's also (and I think she says this at some point, "purely a bluff-catcher" IIRC) tons of pure air bluffs that J4 beats.
I simply do not understand the stance that if a poker player makes a call with a crap hand that turns out to be good, and then holds up, that she must be cheating. Not for nothing, but I have never heard such accusations leveled at a male player like this.
> If you're questioning someone's credibility, you can't really go with their own story about what happened as a reason to exonerate them.
I am not questioning her credibility.
Because there is no reason to.
The only reason anyone thinks she might be cheating is because her heads-up opponent angrily accused her of it. Not even after a bad beat! After his hand, which was never ever the best hand of the two, failed to make any of its draws over a river that was run twice.
I question the credibility of the accusation. Garret has every reason to accuse his opponent of cheating, especially given that he successfully intimidated Robbi into returning his chips, which to me is just insane.
Why the conversation isn't "poker player doesn't catch his outs, demands his money back" is beyond me.
But again it’s not uncommon to believe your opponent is chasing a card especially in this case - straight and flush draw and the straight is below your top card. If it’s a straight draw they have, you know you have them beat. And going all in pre-river on your draws is the main tactic you’d expect to see.
The problem with the cheating is that it requires her to be (1) smart enough to not get caught cheating in a high stakes poker game and (2) dumb enough to cheat in a ridiculously obvious way. Those seem mutually contradictory. If you can pull off the cheat you'd be smart enough to cheat in a less obvious way.
Oh no, the stuff I put in quotes was not Robbi speaking during the hand, but my imagining of what Negreanu would have sounded like in his typical confident, chattery thinking aloud.
If she didn't say anything like this, I guess I don't understand why you said you "fear" that this wouldn't be happening if Negreanu had made the same moves and had narrated precisely what cards he put his opponent on.
My sense is that the different reaction would be because of the narration, not the identity of the player (though it would also make sense that a player with a decade(s) long reputation would be given more deference than a newer player).
Daniel Negreanu is a player who is renowned for making insane reads like the GP is roleplaying, narrating them live. GP is—I believe—saying that if Negreanu had made the same play and narrated it this way, nobody would have batted an eye.
I think the reality is she just made a shitty decision in the moment that happened to play out well. Anyone who's played poker long enough has done the same dozens of times.
+1 on "well defined spec" -- a lot of Healthcare integrations are specified as "here's the requests, ensure your system responds like this" and being able to put those in a test suite and know where you're at is invaluable!
But TDD is fantastic for growing software as well! I managed to save an otherwise doomed project by rigorously sticking to TDD (and its close cousin Behavior Driven Development.)
It sounds like you're expecting that the entire test suite ought to be written up front? The way I've had success is to write a single test, watch it fail, fix the failure as quickly as possible, repeat, and then once the test passes fix up whatever junk I wrote so I don't hate it in a month. Red, Green, Refactor.
If you combine that with frequent stakeholder review, you're golden. This way you're never sitting on a huge pile of unimplemented tests; nor are you writing tests for parts of the software you don't need. For example from that project: week one was the core business logic setup. Normally I'd have dove into users/permissions, soft deletes, auditing, all that as part of basic setup. But this way, I started with basic tests: "If I go to this page I should see these details;" "If I click this button the status should update to Complete." Nowhere do those tests ask about users, so we don't have them. Focus remains on what we told people we'd have done.
I know not everyone works that way, but damn if the results didn't make me a firm believer.
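To make that red/green/refactor loop concrete, here's a minimal Python/pytest sketch of one cycle; Task and mark_complete are hypothetical stand-ins for the "status should update to Complete" example above, not the actual project code.

    # Hypothetical illustration of one red/green cycle.
    class Task:
        # Written only after the test below had failed (red); this is the
        # simplest thing that makes it pass (green).
        def __init__(self, status="Open"):
            self.status = status

        def mark_complete(self):
            self.status = "Complete"


    def test_clicking_complete_updates_status():
        # In TDD this test is written first and fails because Task doesn't exist yet.
        task = Task()
        task.mark_complete()  # the "button click" reduced to a method call
        assert task.status == "Complete"

Refactor happens after green: tidy whatever was hacked in to pass, with the test keeping you honest.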
The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.
Unit tests are still easy to write, but most complex software has many parts that combine combinatorially, and writing integration tests requires lots of mocking. This investment pays off when the design is stable, but when business requirements are not that stable this becomes very expensive.
Some tests are actually very hard to write -- I once led a project where the code had both cloud and on-prem API calls (and called Twilio). Some of those environments were outside our control, but we still had to make sure we handled their failure modes. The testing code was very difficult to write and I wished we'd waited until we stabilized the code before attempting to test. There were too many rabbit holes that we naturally got rid of as we iterated, and testing was like a ball and chain that made everything super laborious.
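For the failure-mode testing described above, one common approach is to isolate the external call behind a thin wrapper and use unittest.mock to inject the outage. A minimal sketch; SmsGateway and notify_customer are made-up names for illustration, not the project's real code:

    from unittest import mock


    class SmsGateway:
        # Hypothetical thin wrapper around an external SMS API (think Twilio).
        def send(self, to, body):
            raise NotImplementedError("talks to the real service in production")


    def notify_customer(gateway, to, body):
        # Degrade gracefully instead of blowing up when the provider is down.
        try:
            gateway.send(to, body)
            return True
        except ConnectionError:
            return False


    def test_notify_survives_provider_outage():
        gateway = mock.Mock(spec=SmsGateway)
        gateway.send.side_effect = ConnectionError("provider unreachable")
        assert notify_customer(gateway, "+15550100", "hi") is False
        gateway.send.assert_called_once()

It doesn't remove the cost described above, but it at least keeps the expensive part (simulating the environment) out of the production code path.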
TDD also represents a kind of first order thinking that assumes that if the individual parts are correct, the whole will likely be correct. It’s not wrong but it’s also very expensive to achieve. Software does have higher order effects.
It’s like the old car analogy. American car makers used to believe that if you QC every part and make unit tolerances tight, you’ll get a good car on final assembly (unit tests). This is true if you can get it right all the time but it made US car manufacturing very expensive because it required perfection at every step.
Ironically Japanese carmakers eschewed this and allowed loose unit tolerances, but made sure the final build tolerance worked even when the individual unit tolerances had variation. They found this made manufacturing less expensive and still produced very high quality (arguably higher quality since the assembly was rigid where it had to be, and flexible where it had to be). This is craftsman thinking vs strict precision thinking.
This method is called “functional build” and Ford was the first US carmaker to adopt it. It eventually came to be adopted by all car makers.
> Some tests are actually very hard to write — I once led a project that where the code had both cloud and on-prem API calls
I believe that this is a fundamental problem of testing in all distributed systems: you are trying to test and validate for emergent behaviour. The other term we have for such systems is: chaotic. Good luck with that.
In fact, I have begun to suspect that the way we even think about software testing is backwards. Instead of test scenarios we should be thinking in failure scenarios - and try to subject our software to as much of those as possible. Define the bounding box of the failure universe, and allow computer to generate the testing scenarios within. EXPECT that all software within will eventually fail, but as long as it survives beyond set thresholds, it gets a green light.
In a way... we'd need something like a bastard hybrid of fuzzing, chaos testing, soak testing, SRE principles and probabilistic outcomes.
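As a rough sketch of that idea, assuming a made-up retrying client: declare the failure envelope, let a property-testing library (Hypothesis here) generate scenarios inside it, and pass as long as the code survives the whole envelope.

    from hypothesis import given, strategies as st


    def fetch_with_retries(call, max_attempts=5):
        # Keeps calling until it succeeds or the attempt budget is exhausted.
        last_error = None
        for _ in range(max_attempts):
            try:
                return call()
            except ConnectionError as err:
                last_error = err
        raise last_error


    @given(consecutive_failures=st.integers(min_value=0, max_value=4))
    def test_survives_bounded_failure_bursts(consecutive_failures):
        # The "bounding box of the failure universe": up to 4 failures in a row.
        state = {"failures_left": consecutive_failures}

        def flaky_call():
            if state["failures_left"] > 0:
                state["failures_left"] -= 1
                raise ConnectionError("injected failure")
            return "ok"

        # Green light as long as we survive everything inside the declared envelope.
        assert fetch_with_retries(flaky_call) == "ok"

This is only a toy version of the fuzzing/chaos/soak hybrid described above, but the shape is the same: the test asserts survival within a threshold, not exact behaviour.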
>I believe that this is a fundamental problem of testing in all distributed systems: you are trying to test and validate for emergent behaviour. The other term we have for such systems is: chaotic. Good luck with that
Emergent behaviour is complex, not chaotic. Chaos comes from sensitive dependence on initial conditions. Complexity is associated with non-ergodic statistics (i.e. sampling across time gives different results to sampling across space).
I work on the Erlang virtual machine (Elixir), and I regularly write tests against common distributed-systems failures. You don't need property tests (or Jepsen/Maelstrom-style fuzzing) for your 95% scenarios. Distributed systems are not magically failure prone.
> TDD also represents a kind of first order thinking that assumes that if the individual parts are correct, the whole will likely be correct. It’s not wrong
In fact it is not just wrong, but very wrong, as your auto example shows. Unfortunately engineers are not trained/socialised to think as holistically as perhaps they should be.
The non-strawman interpretation of TDD is the converse: if the individual parts are not right, then the whole will probably be garbage.
It's worth it to apply TDD to the pieces to which TDD is applicable. If not strict TDD than at least "test first" weak TDD.
The best candidates for TDD are libraries that implement pure data transformations with minimal integration with anything else.
(I suspect that the rabid TDD advocates mostly work in areas where the majority of the code is like that. CRUD work with predictable control and data flows.)
Yes. Agree about TDD being more suited to low dependency software like CRUD apps or self contained libraries.
Also sometimes even if the individual parts aren’t right, the whole can still work.
Consider a function that handles all cases except for one that is rare, and testing for that case is expensive.
The overall system however can be written to provide mitigations upon composing — eg each individual function does a sanity check on its inputs. The individual function itself might be wrong (incomplete) but in the larger system, it is inconsequential.
Test effort is not 1:1. Sometimes the test can be many times as complicated to write and maintain as the function being tested, because it has to generate all the corner cases (and has to regenerate them if anything changes upstream). If you're testing a function in the middle of a very complex data pipeline, you have to regenerate all the artifacts upstream.
Whereas sometimes an untested function can be written in such a way that it is inherently correct from first principles. An extreme analogy would be the Collatz conjecture. If you start by first writing the tests, you'd be writing an almost infinite corpus of tests -- on the flip side, writing the Collatz function is extremely simple and correct up to a large finite number.
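For reference, this is the asymmetry meant above: the Collatz step-count function is a few readable lines, while a "tests first" exhaustive corpus for it would be unbounded. The spot checks below are just a handful of well-known values.

    def collatz_steps(n):
        # Number of steps for n (>= 1) to reach 1 under the 3n+1 rule.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps


    # A few spot checks; exhaustively testing the conjecture itself is impossible.
    assert collatz_steps(1) == 0
    assert collatz_steps(6) == 8
    assert collatz_steps(27) == 111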
Computer code is an inherently brittle thing, and the smallest errors tend to cascade into system crashes. Showstopper bugs are generated from off-by-one errors, incorrect operation around minimum and maximum values, a missing semicolon or comma, etc.
And doing sanity check on function inputs addresses only a small proportion of bugs.
I don't know what kind of programming you do, but the idea that a wrong function becomes inconsequential in a larger system... I feel like that just never happens unless the function was redundant and unnecessary in the first place. A wrong function brings down the larger system feels like the only kind of programming I've ever seen.
Physical unit tolerances don't seem like a useful analogy in programming at all. At best, maybe in sysops regarding provisioning, caches, API limits, etc. But not for code.
> I don't know what kind of programming you do, but the idea that a wrong function becomes inconsequential in a larger system... I feel like that just never happens unless the function was redundant and unnecessary in the first place. A wrong function brings down the larger system feels like the only kind of programming I've ever seen.
I think we’re talking extremes here. An egregiously wrong function can bring down a system if it’s wrong in just the right ways and it’s a critical dependency.
But if you look at most code bases, many have untested corner cases (which they’re likely not handling) but the code base keeps chugging along.
Many codebases are probably doing something wrong today (hence GitHub issues). But to catastrophize that seems hyperbolic to me. Most software with mistakes still work. Many GitHub issues aren’t resolved but the program still runs. Good designs have redundancy and resilience.
A counter to that could be all the little issues found by fuzz testing legacy systems and static analysis. Often in widely used software where those issues did not indeed manifest. Unit tests also don't prove correctness, they're as good as the writer of the unit test's ability to predict failure.
I can tell you that most (customer) issues in the software I work on are systemic issues; the database (a widely used OSS one) can corrupt under certain scenarios. They can be races, behaviour under failure modes, lack of correctness at some higher order (e.g. half-failed operations), the system not implementing the intent of the user. I would say very rarely are those issues that would have been caught by unit testing. Now integration testing and stress testing will uncover a lot of those. This is a large-scale distributed system.
Now sometimes after the fact a unit test can somehow be created to reproduce the specific failure, possibly at great effort. That's not really something that useful at this point. You wouldn't write that in advance for every possible failure scenario (infinite).
All that said, sometimes there are attacks on systems that relate to corner-case errors, which is a problem. Static analysis and fuzzers are IMO more useful tools in this realm as well. Also I think I'm hearing "dynamic/interpreted language" there (missing semicolons???). Those might need more unit testing to make up for the lack of compiler checks/warnings/type safety, for sure.
The other point that's often missed is the drag that "bad" tests add to a project. Since it's so hard to write good tests when you mandate testing you end up with a pile of garbage that makes it harder to make progress. Other factors are the additional hit you take maintaining your tests.
Basically choosing the right kind of tests, at the right level, is judgement. You use the right tool for the right job. I rarely use TDD but I have used it in cases where the problem can relatively easily be stated in terms of tests and it helps me get quick feedback on my code.
EDIT: Also, as another extreme thought ;) some software out there could be working because some function isn't behaving as expected. There's lots of C code out there that uses things that are technically UB but do actually have some guarantee under some precise circumstances (bad idea, but what can you do). In this case the unit test would pass despite the code being incorrect.
I work in software testing, and I've seen this many times actually. Small bugs that I notice because I'm actually reading the code, which became inconsequential because that code path is never used anymore or the result is now discarded, or any of a number of things that change the execution environment of that piece of code.
If anything I'm wondering the same question about you. If you find it so inconceivable that a bug is hiding in working code that is held up by the calling environment around it, then you must not have worked with big or even moderately sized codebases at all.
> sometimes even if the individual parts aren’t right, the whole can still work.
And in fact, fault tolerance with the assumption that all of its parts are unreliable and will fail quickly makes for more fault-tolerant systems.
The _processes and attitude_ that cause many individual parts to be incorrect will also cause the overall system to be crap. There's a definite correlation, but that correlation isn't about any specific part.
Yes. Though my point is not that we should aim for a shaky foundation, but that if one is a craftsman one ought to know where to make trade offs to allow some parts of the code to be shaky with no consequences. This ability to understand how to trade off perfection for time — when appropriate — is what distinguishes senior from junior developers. The idea of ~100% correct code base is an ideal — it’s achieved only rarely on very mature code bases (eg TeX, SQLite).
Code is ultimately organic, and experienced developers know where the code needs to be 100% and where the code can flex if needed. People have this idea that code is like mathematics, where if one part fails, every part fails. To me, if that is so, the design is too tight and brittle and will not ship on time. But well-designed code is more like an organism that has resilience to variation.
If individual parts being correct meant the whole thing will be correct, that means if you have a good sturdy propeller and you put it on top of your working car, then you have a working helicopter.
> writing code takes double the time when you also have to write the tests
this time is more than made up for by the debugging, refactoring and maintenance time you'd otherwise lose later, in my experience, at least for anything actively being used and updated
Yes, if you were right about the requirements, even if they weren't well specified. But if it turns out you implemented the wrong thing (either because the requirements simply changed for external reasons, or because you missed some fundamental aspect), then you wouldn't have had to debug, refactor or maintain that initial code, and the initial tests will probably be completely useless even if you end up salvaging some of the initial implementation.
form a belief about a requirement
write a test
test fails
write code
test fails
add debug info to code
test fails no debug showing
call code directly and see debug code
change assert
test fails
rewrite test
test succeed
output test class data.. false
positive checking null equals null
rewrite test
test passes
forget original purpose and stare at green passing tests with pride.
On a more serious note: just learn to use a debugger, and add asserts, if need be. To me TDD only helps having something that would run your code - but that's pretty much it. If you have other test harness options, I fail to see the benefits outside conference talks and books authoring.
Yes, so much this. I don’t really understand how people could object to TDD. It’s just about putting together what one manually does otherwise. As a bonus, it’s not subject to biases because of after-the-fact testing.
>at least for anything actively being used and updated
This implies that the strength of the tests appears when it's modified?
Like the article says, TDD doesn't own the concept of testing. You can write good tests without submitting yourself to a dogma of red/green, minimum-passing (local-maximum-seeking) code. Debating TDD is tough because it gets bogged down with having to explain how you're not a troglodyte who writes buggy untested code.
And - on a snarkier note - this is a better argument against dynamic typing than for TDD.
I can't remember the last time the speed at which I could physically produce code was the bottleneck in a project. It's all about design and thinking through and documenting the edge cases, and coming up with new edge cases and going back to the design. By the time we know what we're going to write, writing the code isn't the bottleneck, and even if it takes twice as long, that's fine, especially since I generally end up designing a more usable interface as a result of using it (in my tests) as it's being built.
> The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.
The times I have believed this myself often turned out to be wrong once the full cost of development was taken into account. And I came back to the code later wishing I had tests around it. So you end up TDDing only the bug fix: exercising that part of the code with the failing test and then the code correction.
> The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.
That was the time it took to actually write working code for that feature.
The version of "working code" that took 50% as long was just a con to fool people into thinking you'd finished until they move onto other things and a "perfectly acceptable" regression is discovered.
The reason someone is iterating fast is usually that they are trying to discover the best solution to a problem by building things. Once they have found it, they can write "working code". But they don't want to have to write tests for all the approaches that didn't work and will be thrown away after the prototyping phase.
There are two problems I've seen with this approach. One is that sometimes the feature you implemented and tested turns out to be wrong.
Say, initially you were told "if I click this button the status should update to complete", you write the test, you implement the code, rinse and repeat until a demo. During the demo, you discover that actually they'd rather the button become a slider, and it shouldn't say Complete when it's pressed, it should show a percent as you pull it more and more. Now, all the extra care you took to make sure the initial implementation was correct turns out to be useless. It would have been better to have spent half the time on a buggy version of the initial feature, and found out sooner that you need to fundamentally change the code by showing your clients what it looks like.
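To make that concrete, here's a rough Python sketch (hypothetical names, not anyone's actual code) of the kind of test that becomes dead weight when the requirement flips from "button sets status to complete" to "slider shows a percent":

```python
import unittest

class Task:
    """Hypothetical model behind the UI in the example above."""
    def __init__(self):
        self.status = "pending"

    def click_complete_button(self):
        self.status = "complete"

class TestCompleteButton(unittest.TestCase):
    def test_click_sets_status_to_complete(self):
        task = Task()
        task.click_complete_button()
        self.assertEqual(task.status, "complete")

# After the demo, the requirement becomes "dragging a slider sets a percent".
# click_complete_button() and this test both get deleted; the extra care spent
# making them airtight bought nothing, which is the point being made above.

if __name__ == "__main__":
    unittest.main()
```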
Of course, if the feature doesn't turn out to be wrong, then TDD was great - not only is your code working, you probably even finished faster than if you had started with a first pass + bug fixing later.
But I agree with the GP: unclear and changing requirements + TDD is a recipe for wasted time polishing throw-away code.
Edit: the second problem is well addressed by a sibling comment, related to complex interactions.
> Say, initially you were told "if I click this button the status should
> update to complete", you write the test, you implement the code, rinse and
> repeat until a demo. During the demo, you discover that actually they'd
> rather the button become a slider, and it shouldn't say Complete when it's
> pressed, it should show a percent as you pull it more and more. Now, all the
> extra care you took to make sure the initial implementation was correct turns
> out to be useless.
Sure, this happens. You work on a thing, put it in front of the folks who asked for it, and they realize they wanted something slightly different. Or they just plain don't want the thing at all.
This is an issue that's solved by something like Agile (frequent and regular stakeholder review, short cycle time) and has little to do with whether or not you've written tests first and let them guide your implementation; wrote the tests after the implementation was finished; or just simply chucked automated testing in the trash.
Either way, you've gotta make some unexpected changes. For me, I've really liked having the tests guide my implementation. Using your example, I may need to have a "percent complete" concept, which I'll only implement when a test fails because I don't have it, and I'll implement it by doing the simplest thing to get it to pass. If I approach it directly and hack something together I run the risk of overcomplicating the implementation based on what I imagine I'll need.
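As a hedged illustration of that "simplest thing to get it to pass" step (the names Task and set_percent_complete are made up for the sketch): the failing test is what forces the "percent complete" concept into existence, and the implementation does no more than the tests demand.

```python
import unittest

class Task:
    """Hypothetical task model; percent_complete exists only because a test asked for it."""
    def __init__(self):
        self.percent_complete = 0

    def set_percent_complete(self, value):
        # Simplest thing that can pass: clamp and store, nothing speculative.
        self.percent_complete = max(0, min(100, value))

class TestPercentComplete(unittest.TestCase):
    def test_new_task_starts_at_zero(self):
        self.assertEqual(Task().percent_complete, 0)

    def test_slider_sets_percent(self):
        task = Task()
        task.set_percent_complete(40)
        self.assertEqual(task.percent_complete, 40)

    def test_percent_is_clamped(self):
        task = Task()
        task.set_percent_complete(150)
        self.assertEqual(task.percent_complete, 100)

if __name__ == "__main__":
    unittest.main()
```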
I don't have an opinion on how anyone else approaches writing complex systems, but I know what's worked for me and what hasn't.
Respectfully, I think the distinction they're making is that "writing ONE failing test then the code to pass it" is very different from "write a whole test suite, and then write the code to pass it".
The former is more likely to adapt to the learning inherent in the writing of code, which someone above mentioned was easy to lose in TDD :)
One of the above comments mentions BDD as a close cousin of TDD, but that is wrong: TDD done properly is BDD, since you should only be testing behaviours, which is what allows you to "fearlessly refactor".
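Not from the thread, but a small Python sketch (invented names) of what "only testing behaviours" means in practice: the first test pins observable behaviour and survives a refactor of the internals; the second pins the implementation and breaks the moment the internal representation changes.

```python
import unittest

class ShoppingCart:
    """Hypothetical cart; internally stores (name, price) tuples today, maybe a dict tomorrow."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

class TestCartBehaviour(unittest.TestCase):
    def test_total_reflects_added_items(self):
        cart = ShoppingCart()
        cart.add("book", 10)
        cart.add("pen", 2)
        # Only the observable behaviour is asserted; _items can be refactored freely.
        self.assertEqual(cart.total(), 12)

class TestCartImplementation(unittest.TestCase):
    def test_items_are_stored_as_tuples(self):
        cart = ShoppingCart()
        cart.add("book", 10)
        # Breaks as soon as the internal representation changes, even if total() still works.
        self.assertEqual(cart._items, [("book", 10)])

if __name__ == "__main__":
    unittest.main()
```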
I don't think TDD gets to own the concept of having a test for what you're refactoring. That's just good practice & doesn't require that you make it fail first.
This falls under the category of problems where verifying (hell, even describing) the result is harder than writing the code that produces it.
Here’s how I would do it. The challenge is that the result can’t be precisely defined, because it’s essentially art. But with TDD the assertions don’t actually have to live in code. All we have to do is make incremental, verifiable progress that lets us fearlessly make changes.
So I would set up my viewport as a grid where in each square there will eventually live a rendered image or animation. The first one blank, the second one a dot, the third a square, the fourth with color, the fifth a rhombus, the sixth with two disjoint rhombuses …
When you’re satisfied with a box, you copy/paste its code into the next one and work on the next test, always rendering the previous frames. So you can always reference all the previous working states and just start over if needed.
So the TDD flow becomes
1. Write down what you want the result of the next box to look like.
2. Start with the previous iteration and make changes until it looks like what you wanted.
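Here's a minimal sketch of what that grid setup could look like, in Python with matplotlib (the stage names and shapes are placeholders I made up; the real "assertion" is a person looking at each box and comparing it against the description written down in step 1):

```python
import numpy as np
import matplotlib.pyplot as plt

SIZE = 32  # pixels per box; arbitrary for this sketch

def blank():
    return np.zeros((SIZE, SIZE))

def dot():
    img = blank()
    img[SIZE // 2, SIZE // 2] = 1.0
    return img

def square():
    img = blank()
    img[8:24, 8:24] = 1.0
    return img

# Each entry pairs the written-down expectation (step 1) with the render
# function you iterate on until the box matches it (step 2).
stages = [
    ("blank", blank),
    ("a single dot", dot),
    ("a filled square", square),
]

fig, axes = plt.subplots(1, len(stages), figsize=(3 * len(stages), 3))
for ax, (label, render) in zip(np.atleast_1d(axes), stages):
    ax.imshow(render(), cmap="gray", vmin=0.0, vmax=1.0)
    ax.set_title(label)
    ax.axis("off")
plt.show()
```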
Using wetware test oracles is underappreciated. You can't do it in a dumb way, of course, but with a basic grasp of statistics and hypothesis testing you can get very far with sprinkles of manual verification of test results.
(Note: manual verification is not the same as manual execution!)
And that’s happening. The next test is “the 8th box should contain a rhombus slowly rotating clockwise” and it’s failing because the box is currently empty. So now you write code.
> tl;dr I can't understand getting mad at the Internet for being the Internet
> in 2022, it will never get better, if anything it will only get worse [...]
If I may provide an alternate tl;dr to your comment: "People are being rude, disrespectful assholes, which is absolutely fine. Ron Gilbert asked for this abuse by making things. The real issue is that he disabled comments on his own blog, where people should be firmly allowed and encouraged to shit on his work."
> The question here is: "Is this a really innovative graphical design that
> will get praised by future generations but is misunderstood now" or "Is this
> just a bad decision that the creator is unwilling to own". I tend to think
> it's the latter. Ultimately the sales will tell.
The question here is not "should people like a thing," it's "why are people such assholes?" To quote Ron Gilbert's linked post:
> Play it or don't play it but don't ruin it for everyone else.
It sure seems like this is a fantastic example of the Greater Internet Fuckwad Theory [0]: nobody would speak so awfully to Ron Gilbert if they were leaning over his shoulder watching the trailer.
"continue to blame the customer?" He's never said that folks were wrong, or blamed anyone for anything. Seems to me like he just wanted folks to not be shitty.
Take, for example, this comment by someone calling themselves _Proud Retro Fascist_:
> Nice attempt at silencing critics. The game will fail because it is
> objectively hideous and you will only have yourself to blame.
>
> In the end all you will have achieved is killing off Monkey Island once more.
> This time permanently.
>
> All you had to do was make a game with an art style that appeals to
> everyone, whatever it might be. Instead, you opted for the most repugnant,
> revolting and hideous TRASH art style that anybody has ever bared witness to
> in a video game. You must be out of your mind.
>
> All said, try hiring good artists next time... if there is a next time, that
> is.
>
> RIP MONKEY ISLAND
I wouldn't blame anyone for being upset after receiving this kind of vile commentary: "Objectively hideous"; "repugnant, revolting and hideous TRASH art style". There's nothing constructive or even reasonable there; it's purely nastiness for nastiness' sake.
Stay strong, Ron Gilbert, don't let the assholes get you down.
Yeah, that's a very harsh comment. Nonetheless, there is something to be learnt here:
If anything, the comments of the fans show that this artstyle alienates many of them. So if, for example, the artstyle had been shown at the very start of development, the developers could have listened to the fans and changed it. But everyone's in a tough spot now because development is nearly finished (the game is planned for this year) and it would be too late to change course.
As such, Ron Gilbert can only do one thing: buckle up, release the game and let it be played by those who are not alienated by the artstyle. Maybe it will charm new fans even more; maybe not. Who knows. However, it certainly is an artstyle that is not "universal". AAA developers, for example, usually refine their main character so much that he appeals more or less universally; they want someone who is liked by everyone, not a main character that only 50% of the target audience likes. BUT! This comes with the caveat that these games can end up overoptimized for the mainstream and thus somewhat boring. That's why I love what the artists and Ron Gilbert did, even though I am torn on the style myself.
Btw, I myself was even more alienated by the artstyle of Broken Age (so much that I never played it), and that was a game I had even invested money in. Yet I never would have thrown such words at Tim Schafer. Nor at Ron Gilbert, for that matter.
The lead artist of Return to Monkey Island previously worked at Double Fine, so it shouldn't be a surprise that if you didn't like the art style Double Fine made its "house style", you likely won't be interested in what an alumnus of that style is trying to innovate at a follow-up studio.
> If anything, the comments of the fans show that this artstyle alienates many of the fans.
Vocal fans are a sampling bias. Don't forget that volume in terms of loudness of complaint does not equal volume in terms of number of complainers. It may not be that "many" fans in number just because they have been that loud/obnoxious.
Interesting. I absolutely loved the art style of Broken Age; it's original and distinctive in a way few games are. I guess you're right that optimizing for the mainstream tends to create relatively boring art.
And it's why I like the new art style too.
I understood it differently, to mean: "All you had to do was make a game with an art style that does not get in the way [of enjoying the game], whatever it might be."
I think this interpretation makes sense, since the original game is primarily story-driven.
Holding up the complaints of someone called "_Proud Retro Fascist_" shouldn't invalidate the legitimate criticism; even a broken clock is right twice a day...
> Take, for example, this comment by someone calling themselves Proud Retro Fascist: [...]
For what it's worth, the person called themselves that in jest: apparently, in an earlier comment, someone referred to people skeptical of the new design as "retro fascists". Not sure if that was supposed to be sarcastic as well, since the comment has been deleted.
And still, if Ruby Central has issued an "official" statement about any of this, I can't find one on https://rubycentral.org/.