Business, tech, and life by a nerd. New every Tuesday: Splitting Light: The Prism of Growth and Discovery.
Splitting Light: Season 2 - Episode 28
Published about 2 months ago • 3 min read
Side quests
If you are no longer interested in the newsletter, please unsubscribe
After shipping the hardware to Amsterdam, we quickly launched the private and then the public beta. We were the first product to reach public beta, in November 2018. Database as a Service was not far behind.
[Image: Object Storage in public beta]
Théo (a) had instructed the customer success team to forward almost all support tickets to us. We handled level 1 (L1) support ourselves: every single issue a customer faced, we took care of, and we helped customers onboard. As we grew confident, we handed L1 back and moved on until we were comfortable doing level 2 (L2). It was a tradition in the team: we would do things by hand for a while, and once we were comfortable we would hand them over or automate the details.
We took L1, then L2, and lastly L3. That meant support was faster at launch, because we could answer directly. Afterwards, customer success had more experience and better documentation to work from.
We helped customers and internal teams plug into Object Storage, adjusting their code logic and closing support tickets when needed. The team was doing a great job across multiple parallel tracks: the foundations of the billing components, maintenance, bug fixing, and more.
Florian (b) was making good progress on Block Storage. Because he had prior experience with Ceph, he was moving fast. By September he had selected the hardware, set up the network, and was testing the performance of the first cluster. We were close to a private beta on Block Storage.
I was doing side quests. Some chosen and some imposed.
One of them was trying to build request traces from logs. Everything we needed was in the logs, so I thought it would be possible to parse them and reassemble the traces. I almost managed it, but through the API alone I could not go further. Why not integrate the tracing SDK into the code, you ask? Because it was too risky to add code we didn't master to a huge codebase we didn't master either. Without being able to add the SDK to the software, the next best thing was to emulate it from the logs.
An imposed one was to inventory every single machine, with its rack and room in the datacenter. I remember that one clearly. I was told: "You are going to spend a few weeks on that." But I didn't have the time. I bolted together a few hundred lines of Python and used several existing discovery mechanisms to populate our machine inventory in three or four days, then went back to storage team tasks. It was dirty, almost throwaway code, but it did the job. I made sure it was accurate before ending that side quest.
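The shape of that kind of throwaway tooling is usually the same: pull partial records from several existing discovery sources and merge them into one inventory. A minimal sketch, with entirely hypothetical source data and field names:

```python
# Merge machine records from several discovery sources into one inventory.
# Each source yields dicts like {"host": ..., "rack": ..., "room": ...};
# no single source knows everything about a machine.

def merge_inventory(*sources):
    """Merge records keyed by hostname.

    Earlier sources take precedence: a later source only fills in
    fields that are still missing for that host.
    """
    inventory = {}
    for source in sources:
        for record in source:
            entry = inventory.setdefault(record["host"], {})
            for key, value in record.items():
                entry.setdefault(key, value)  # keep the first value seen
    return inventory

# Example with two hypothetical sources (e.g. DHCP leases and switch data):
dhcp = [{"host": "s1", "rack": "A3"}]
switches = [{"host": "s1", "room": "R2"},
            {"host": "s2", "rack": "B1", "room": "R1"}]
inventory = merge_inventory(dhcp, switches)
```

Dirty and throwaway, but that is exactly the point: a few days of merging what the infrastructure already knows beats weeks of walking the datacenter with a clipboard.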
My job was also to assess whether side quests led to benefits or not. We had a discussion inside the team about whether to use Kubernetes to run the cluster. I was pushing back: for me it was a useless burden. Maxime (c) made an interesting move; he forced me to explain in detail why I didn't want it. First, what is Kubernetes used for? Scaling and running services. What did we run the cluster on? A fixed set of servers; we could not scale beyond the servers we had. What did Kubernetes require? An extra set of services just to make it run. The net result for us would have been extra work. For our context and use case, Kubernetes was not appropriate.
[Diagram: using Kubernetes would have added two extra components to manage, at no extra benefit.]
A storm was brewing in the background. We needed to trigger lightning.
(a) Théotime Rivière, Storage Product Manager then, now Founder of Freedom From Scratch
(b) Florian Florensa, Block Storage Devops then, now Senior Software Engineer at Datadog
(c) Maxime Vaude, Lead DevOps / Product Owner then, now Freelance