michealr's comments

I have worked freelance in similar situations.

Personally, I find it really fun. It's a nice mix of development, design, and organizational understanding.

What I like to do is divide up the project. Usually, these legacy systems don't have clear division points; it's all a big bundle of interdependence. But in my experience, there's often some less impactful secondary functionality that can be spun off.

That will allow you a few things:

1. Figure out what you'd count as success for such a project; the limited scope makes it easier to identify the endpoint.

2. Learn about the legacy system, and perhaps identify which elements you can extract from it. Not necessarily code: in previous work, I've been able to wrap or scrape data in certain areas to provide a sort of external output (I sketch the idea further down).

3. Figure out how to work with devs, manage your own time, and educate your organization about what you're trying to do.

The third and last point is critical. The failure modes on the development side are the obvious ones:

1. Lack of experience

2. Poor scope

3. Overly complicated solution

etc.

But the real failure mode is political. You need a developer with some political acumen as well. There's going to be a lot—and I mean a lot—of interviewing people about how exactly subsystem X fits into their workflow. You need the political skill to navigate that, in terms of getting buy-in and quality information.

Downstream of the political dimension, in my experience, comes the possible design solution. The actual interviews with people, and the regular, constant contact with staff about their jobs, are critical to building something that replaces the existing system without replicating its design failures.

One mistake you want to avoid is building something too similar to the old solution and missing out on critical information about how the job is actually done.
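To make the wrap-or-scrape idea above concrete, here's a minimal sketch of the sort of thing I mean. The export URL and the clean CSV endpoint are hypothetical; the real shape depends entirely on the legacy system, and you may be scraping HTML instead:

    import csv
    import io
    import json
    import urllib.request

    # Hypothetical export URL on the legacy system; often it's a report
    # page you scrape rather than a clean CSV endpoint.
    LEGACY_EXPORT = "http://legacy.internal/reports/export.csv"

    def legacy_report_as_json() -> str:
        # The legacy app can't be changed, but its output can be wrapped
        # to give the new system a stable JSON interface.
        with urllib.request.urlopen(LEGACY_EXPORT, timeout=10) as resp:
            rows = list(csv.DictReader(io.TextIOWrapper(resp, "utf-8")))
        return json.dumps(rows)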

Also, I'm not currently looking for work—enjoying my current role—but if you want to hit me up, feel free. I can at least impart some experience on what to do.


I'm remarkably in a similar situation: Irish + Polish, learnt because of my wife and extended family. When it comes to i18n in the codebase I've inherited, it needs a lot of work, but it was immensely helpful to have a background in two particularly tricky languages (relative to English).

It helped in quickly scoping out what needs to be reworked, and where the inherited setup clearly falls short. I don't think knowing a great deal about other languages is necessary for the same effect, though; just enough to smell that something might be a bit trickier, e.g. knowing that the case system in a given language shows up in different ways, or that verb + pronoun order isn't fixed.
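As a concrete example of the kind of thing that's easy to smell if you know such a language (my sketch, not anything from the codebase in question): Polish needs more plural forms than English, so any i18n layer hardcoding a singular/plural pair can't express it:

    # CLDR plural categories for Polish cardinals (integers only).
    # English only distinguishes "one" and "other"; Polish adds "few"
    # and "many", with rules depending on the last digits.
    def polish_plural_category(n: int) -> str:
        if n == 1:
            return "one"
        if 2 <= n % 10 <= 4 and not 12 <= n % 100 <= 14:
            return "few"
        return "many"

    assert polish_plural_category(1) == "one"    # "1 plik"
    assert polish_plural_category(3) == "few"    # "3 pliki"
    assert polish_plural_category(5) == "many"   # "5 plikow"
    assert polish_plural_category(22) == "few"   # "22 pliki"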


To counter your counter, and to refocus on the potential improvements to the NHS: I think the parent is correct that the NHS has a lot of issues, with meaningful consequences for people's health and well-being. But having followed discussions about it, there seems to be an outsized focus on comparisons with America and American healthcare costs. The UK, and by extension the discussion of the NHS's issues, would be better served by comparing healthcare costs and outcomes with neighbouring peer countries such as the Netherlands or France, or even further afield, the likes of Singapore.

Note, I'm from neither the UK nor America, and looking in from the outside, this aspect of the UK discussion, comparing and contrasting with America, seems like the wrong approach.


Would you have a more detailed write-up of what you did, even at a high level? Seems like a cool thing to do.


Unfortunately not, but it's surprisingly straightforward apart from the database bit. Here's a bit more detail from memory. There are many ways of doing this, and some will depend strongly on which tools you're comfortable with (e.g. nginx vs. haproxy vs. some other reverse proxy is largely down to which one you know best and/or already have in the mix). Today I might have considered k8s, but this was before that was even a realistic option; frankly, even with k8s I'm not sure, as the setup in question was very simple to maintain:

* Set up haproxy, nginx or similar as a reverse proxy, and carefully decide whether you can handle retries of failed queries. If you want a true zero-downtime migration, there's a challenge here in making sure you have a setup that lets you add and remove backends transparently. There are many ways of doing this, of varying complexity. I've tended to favour dynamic DNS updates for this; in this specific instance we used Hashicorp's Consul to keep DNS updated with services. I've also used ngx_mruby for instances where I needed more complex backend selection (it allows writing Ruby code that executes within nginx).

* Set up a VPN (or more than one, depending on your networking setup) between the locations, so that the reverse proxy can reach backends in both/all locations, and so that the backends can reach the databases in both places.

* Replicate the database to the new location.

* Ensure your app has a mechanism for determining which database to use as the master. Just as for the reverse proxy, we used Consul for the selection; all backends would switch when a replica was promoted to master (see the sketch after this list).

* Ensure you have a fast method for promoting a database replica to master. You don't want to be in a situation of having to fiddle with this; we had fully automated failover scripts (also sketched below).

* Ensure your app gracefully handles database failure of whatever it thinks the current master is. This is the trickiest bit in some cases: you either need to make sure updates are idempotent, or you need to make sure updates during the switchover either reliably fail or reliably succeed. In the case I mentioned we were able to safely retry requests, but in many cases it'll be safer to just punt on a true zero-downtime migration, assuming your setup can handle promotion of the new master fast enough. In our case the promotion of the new Postgres master took literally a couple of seconds, during which any failing updates just translated into some slow page loads as they retried; if we hadn't been able to retry, it would have meant a few seconds of downtime.
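To make the master-selection and retry bullets concrete, here's a minimal sketch. It's my illustration rather than the actual code from that migration; the Consul key name, database name and the psycopg2 driver are all assumptions:

    import time
    import urllib.request

    import psycopg2  # assumed driver; the original stack wasn't specified

    # Hypothetical Consul KV key holding the current master's address.
    CONSUL_KEY = "http://127.0.0.1:8500/v1/kv/service/postgres/master?raw"

    def current_master() -> str:
        # Ask the local Consul agent which host is the current master.
        with urllib.request.urlopen(CONSUL_KEY, timeout=2) as resp:
            return resp.read().decode().strip()

    def run_idempotent(sql, params=()):
        # Retry a statement across a failover. Only safe because the
        # statement is idempotent; during promotion this surfaces as a
        # slow page load rather than an error.
        for attempt in range(5):
            try:
                conn = psycopg2.connect(host=current_master(), dbname="app")
                try:
                    with conn, conn.cursor() as cur:  # commits on success
                        cur.execute(sql, params)
                    return
                finally:
                    conn.close()
            except psycopg2.OperationalError:
                if attempt == 4:
                    raise
                time.sleep(0.5)  # re-resolve the master and try again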
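And the promotion step itself. pg_ctl promote is the standard Postgres mechanism (on modern Postgres it waits until the replica has exited recovery); the data directory and the Consul update are assumptions on my part:

    import subprocess
    import urllib.request

    def promote_replica(new_master_ip, data_dir="/var/lib/postgresql/data"):
        # Returns once the replica has left recovery; on a healthy
        # replica this takes on the order of seconds.
        subprocess.run(["pg_ctl", "promote", "-D", data_dir], check=True)
        # Flip the Consul key the backends watch to the newly promoted host.
        req = urllib.request.Request(
            "http://127.0.0.1:8500/v1/kv/service/postgres/master",
            data=new_master_ip.encode(),
            method="PUT",
        )
        urllib.request.urlopen(req, timeout=2)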

Once you have the new environment running and capable of handling requests (but using the database in the old environment):

* Reduce DNS record TTL.

* Ensure the new backends are added to the reverse proxy. You should start seeing requests flow through the new backends and can verify error rates aren't increasing. This should be quick to undo if you see errors.

* Update DNS to add the new environment's reverse proxy. You should start seeing requests hit the new reverse proxy, and some of the traffic should flow through the new backends. Wait and see if any issues appear.

* Promote the replica in the new location to master and verify everything still works. Ensure whatever replication you need from the new master works. You should now see all database requests hitting the new master.

* Drain connections from the old backends (remove them from the pool, but leave them running until they're no longer handling any requests; see the sketch after this list). You should now have all traffic past the reverse proxy going via the new environment.

* Update DNS to remove the old environment reverse proxy. Wait for all traffic to stop hitting the old reverse proxy.

* When you're confident everything is fine, you can disable the old environment and bring DNS TTL back up.
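For the drain step, haproxy's runtime API makes this straightforward. A rough sketch, assuming the stats socket is enabled at admin level; the backend/server names and socket path are made up:

    import socket

    def drain_server(server="be_app/old1", sock_path="/var/run/haproxy.sock"):
        # Put an old backend in drain: it finishes in-flight requests
        # but receives no new ones.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(f"set server {server} state drain\n".encode())
        reply = s.recv(4096).decode()
        s.close()
        return reply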

The precise sequencing is very much a question of preference - the point is that you're switching over and testing change by change, and for most of them you can go a step back without too much trouble. I tend to prefer doing the changes that are low-effort to reverse first. Keep in mind that some changes (like DNS) can take some time to propagate.

EDIT: You'll note most of this basically amounts to treating both sites as one large environment, using a VPN to tie them together, and ensuring you have proper high availability. Once you do, the rest of the migration is basically just failing over.


People get paid hard cash for lower quality plans than you’ve just provided, thanks a lot! :)


> If you want a true zero-downtime migration, there's a challenge

It is astounding how many people require 24/7 ops... while working 8/5.

Otherwise this comment is an exemplar of how things should be done. My take on this is that OP is a sysadmin, not a dev. *smug smile*


> It is astounding how many people require 24/7 ops... while working 8/5.

In this case the client had a genuinely global audience. They could have afforded downtime for the actual transition, but it was a useful test of the high-availability features that mattered to them.

I do agree with the overall principle, though - a whole lot of people think they need 24/7 and can't afford downtime, yet almost all of them are a lot less important than e.g. my bank, which does not hesitate to shut down its online banking for maintenance now and again. As it turns out, most people can afford downtime as long as it's planned and announced. Convincing management of that is a whole other issue.

> My take on this is that OP is a sysadmin, not a dev. smug smile

Hah. I'd say I was devops before devops was a thing. I started out writing code, but my first startup was an ISP where I was thrown head-first into learning networks (we couldn't afford to pay to have our upstream provider help set up our connection, so I learnt to configure cisco routers while having our provider on the phone and feigning troubleshooting with a lot of "so what do you have on your side?") and sysadmin stuff, and I've oscillated back and forth between operations and development ever since. Way too few developers have experienced the sysadmin side, and it's costing a lot of companies a lot of money to have devs that are increasingly oblivious to hardware and networks.


> It is astounding how many people require 24/7 ops

Yet when us-east-1 goes offline, it's mostly just shrug and wait for it to come back, because it's not our fault...


Really well done keeping this simple!

It's also another one of those situations where good design principles and coding practices pay off. If the app is a tangled mess of interconnected services, scripts, and cron jobs, this kind of transition won't be possible.


Damn, this is why I come to HN. This is awesome, thank you so much for taking the time to write it up.


This was really nice to read. Thanks!


Bump! This sounds very interesting.


Highly recommend WireGuard for this (see Kilo for a k8s-specific option that works with whatever network you have set up). Setting up a VPN that just works is super simple.


yep, wireguard is the secret for intercloud, for sure.


Same. Bump!


As others have said, it's a grab bag of things, with varying degrees of success. Of course there's no one true silver bullet. The common advice of good sleep and exercise is always a good idea, regardless. Perhaps even team sports, but personal interests will vary on this.

I would say feel ok with a certain level of personal experimentation, but don't let it neurotically consume you. You have already managed to navigate life to this point; not everything needs to be changed, and not everything needs to be queried.

Trying to dramatically change things can backfire. Fitting in something related to your existing interests, but with an extroverted forcing-function aspect, can help.

If you know a technical topic pretty well already, seek to present on it, or teach some intro workshops. Generally, look for things that would exercise certain anti-shyness muscles.

One thing I personally found helpful was learning a language with an immediate focus on talking, even at the beginner stages; there are numerous online language-learning sites with lessons delivered via video chat. I'd say 30-minute lessons are ideal initially.

For me, shyness feels like a certain analytical process turned inwards, like I'm DDoSing my own brain. The excessive nature is the issue, not necessarily the mere act of self-analysis. Finding activities where I had to moderate that excessive tendency helped me recognise the difference.

Therapy is always an additional option too, but that depends on the person and their needs, of course.


Like others, I was diagnosed with ADHD as an adult (at 31, 10 months ago), and I have been taking medication since then. I am also in the EU.

Nothing major to add, since there are some good comments from other commenters already. Retrospectively, I have realised ADHD has perhaps been an advantage in some areas, but it is also an absolute self-sabotage-generating machine for the seemingly simplest of things.

Currently nearing the end of my contract, I've been thinking more about the future. Full-stack dev too. If you ever just want to chat and share stories of ridiculous periods of procrastination, self-doubt and future plans, I'm always willing to chat.

I know that, for me, speaking things out loud has lifted the veil of self-doubt that can descend after a bout of ill-directed attention.


I was diagnosed at 25, about a year ago. I relate to the comment about ADHD being an advantage; I've often thought my brain is well suited to the information age. My home reflects this: I keep a lot of things in plain sight that stimulate my curiosity.

We live in times of unprecedented peace and prosperity, and rather than seeing my wiring as a liability, I see it as a competitive advantage. I may not have all the same strengths, but the ones I have can be leveraged very well.


Can you share what it was like to start medication?


An odd connection perhaps, but this book stuck with me for its framing of how groups change, though it only looks at terrorist organisations.

Essentially, it covers how a group changes its positioning, beliefs and modus operandi over a longer time horizon. This seems to be heavily facilitated by attrition and shedding effects, which change the units of decision-making, the people, to have different personality and character profiles over time.

So there's how the group at time X looks at a situation vs. how the group at time Y looks at a new situation, excluding the immediate information on what is happening in each case. An additional big variable to take into account is how external and internal effects have affected attrition, shedding, and new joiners between those two periods.

The Terrorist's Dilemma: https://press.princeton.edu/books/hardcover/9780691157214/th...


Interesting, did Microsoft bring the research any further?


https://dl.acm.org/doi/abs/10.1145/2463676.2467797

Might be what I was thinking of. I'm sure you can find other publications as well. I'm no longer at university and I've lost touch with that professor, so I'm not sure what their current research is.


Analytics-wise, I'm ok with being restricted; other commenters have mentioned looking at WASM as a possible workaround. So local does seem to make the most sense, practicality-wise.

A thought: the possible scope of services on the data-notary or data-escrow side of things seems like an underexplored product category.


Any such data notary/escrow company has a pretty good chance of eventually getting breached (they'd naturally be a prime target, since the attackers could get tons of data from tons of people on behalf of tons of different companies), and that could destroy the company and maybe also your app. There's also the risk they may eventually have rogue employees, etc.


Regular notaries could be as crooked as rogue employees, yet we still use them because imperfect barriers are still barriers (as with security).

But yeah, when computer-related vulnerabilities are thrown into the mix, it could get ugly.


Sure, there's often going to be some centralized source one needs to trust. The issue with a digital escrow vendor is kind of like the issue with cryptocurrency exchanges - one single breach and you immediately walk out with an unfathomably huge treasure trove.

A rogue notary employee can do some damage and notarize things in exchange for bribes, and a rogue bank employee could help siphon some money away, but a rogue digital escrow employee could be bribed to hand over terabytes of extremely sensitive data on lots of big customers, and a rogue cryptocurrency exchange employee could possibly help someone steal hundreds of millions of dollars pretty easily. It's a huge house of cards.


In theory I assume it would; my bottleneck would just be knowledge. I just don't know enough about FHE to comfortably work with it. FHE as a service would be my little mini dream.
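To give a flavour of computing on encrypted data (a toy sketch of mine, not anything from this thread): even the much simpler partially homomorphic schemes let a server add numbers it can never read. Paillier, for instance; real FHE is far more involved:

    # Toy Paillier cryptosystem: additively homomorphic, NOT full FHE.
    # Tiny hardcoded primes for illustration only -- utterly insecure.
    import math
    import secrets

    p, q = 2003, 2503
    n = p * q
    n2 = n * n
    g = n + 1                     # standard generator choice
    lam = math.lcm(p - 1, q - 1)  # needs Python 3.9+

    def L(x):                     # the "L function" from Paillier's paper
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def encrypt(m):
        r = secrets.randbelow(n - 1) + 1
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # Homomorphic addition: multiplying ciphertexts adds the plaintexts,
    # so the server computes on data it cannot decrypt.
    a, b = encrypt(17), encrypt(25)
    assert decrypt((a * b) % n2) == 42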


There used to be an MIT CSAIL research project, which I can no longer find any trace of, that had establishing FHE as a service as a goal.

Apparently it failed.

