Defining success – How CodeGuard gives back to the developer community

How do you define the success of your company? If you were asked to “define your company”, what would your answer be?
There are many ways that a company can be measured. For example, one could ask: "How many customers do we have? How happy are our customers? How much have we grown in the past year?" There's no doubt that these are important business questions to ask, and questions that should have answers. I think there are more factors to consider, though.
So how else can a company be defined? Why, by its culture, of course! These days companies love to tout the quirky, hip, laid-back, and fun culture they have. Everyone wants to be the 'cool' company on the block with a dog-friendly office and a ping-pong table in the back room.
To be clear, companies with this kind of culture are not bad! It's great to see so many companies trying to create stress-free and happy environments for the people who work there. So then, is a company defined by its culture? I think we're almost there, but there's still one more metric to look at. Once you have happy customers, we here at CodeGuard think you're a third of the way there. Once you have an established company culture with happy employees, we think you're two-thirds of the way there.
This is where giving back to the community comes into play for CodeGuard. By having happy customers, happy employees, and a happy community around us, we here at CodeGuard believe we have defined success.

How CodeGuard gives back

Atlanta Meetups

Not only do we strive for happy customers who rave about our product and happy employees who get to drink beer and play cornhole together on Fridays, but we also try to give back regularly to the community around us. One of the ways we do this is by hosting the monthly Atlanta Go Meetup and Atlanta Docker Meetup events at the CodeGuard office.
We occasionally sponsor food and beer at other meetups in town as well, including these two, but we feel that offering up our space as a place to meet is a great way to give back to the developer community in a larger way. One of CodeGuard's senior developers helps organize the Atlanta Go Meetup, and several of our developers have given talks as presenters.
Our whole team enjoys attending these events though. It gives us the opportunity to meet and learn from some of the other talented developers in our area. We also like knowing that at least once a month (sometimes more) we can provide a fun and relaxing space for other developers to learn and network as well. Take a look at the Atlanta Go Meetup page or Atlanta Docker Meetup page if you’d like to come to an event!

Open-Source projects

Hosting meetups is not the only way we try to give back to the community. Because many parts of our application rely on open-source technology, we recognize the importance of getting involved with that community as well. That is why we have open-sourced a number of projects at CodeGuard: to help other developers and businesses solve technology problems that we once faced. Last year we built and open-sourced a project called S3gof3r – a tool that provides fast, parallelized, pipelined streaming access to Amazon S3.
Some of our other projects include Trailblazer (a route table maintenance utility for VPC subnets that want most Internet traffic behind a NAT) and git-tail (a tool for shrinking git repositories by truncating history). We have a few other projects that are open source, but these three seem to be the most popular to date!
We hope to make more open-source contributions over time, but we are also proud of the involvement we’ve had with that community so far. As for myself, I had the pleasure of speaking at the All Things Open conference in Raleigh last year, a conference all about open-source technology. It was my own personal and fun way of getting involved and giving back to the community!

Sponsoring Conferences

Besides hosting meetups for developers, and releasing open-source technology, we also sponsor several conferences throughout the year as a way to give back. Our two favorites from the past year have been HostingCon and WordCamp ATL.
We always have a lot of fun at the WordCamp conference because of how passionate everyone in the WordPress community is. The conference organizes a whole suite of educational lectures for people just getting started with WordPress, and also provides a fun atmosphere for developers and designers to network and meet new people. This past year we simply wanted to be a positive presence for attendees, so we passed out some of our highly coveted t-shirts for free and raffled off a MacBook Air. As a sponsor, we were proud to invest in the event so that everyone involved would have a more enjoyable and memorable experience.
Candy-apple-red Ferrari 458 Italias are also enjoyable. We had fun raffling off rides in one at a conference this past year as well!

Moving Forward

We have big plans this year to continue sponsoring WordCamps around the nation, as well as other conferences. We're big fans of the WordPress community and would love to keep giving back to it!
So if someone were to ask us, "How do you define the success of your company?", we would say we are defined by our happy customers, our happy employees, and the happy community around us. With a new year ahead of us, we have many plans to keep giving back and to keep making people happy! Stay tuned to our blog throughout the year for future updates.
How does your company give back? We’d love to know! Feel free to leave a comment below or send us a tweet.
Natalie

Infrastructure Update – Backup IP addresses available


We’ve talked in the past about how we’ve leveraged Amazon’s compute services to build a scalable and efficient architecture to serve our customers. The biggest downside to this configuration from an infrastructure standpoint was that each server had its own public IP address. That may not sound like a big deal, but when we’re starting and stopping hundreds of servers per day, it makes it difficult for us to tell customers with certainty which IP addresses will be used to perform their website backups.

Many hosting providers recommend or even require that servers initiating incoming SFTP, FTP, MySQL or SSH connections be added to a whitelist or firewall rule. This meant that there were some customers that we could not serve and some that, understandably, did not want to allow all traffic from Amazon’s public cloud. Today that all changes!

(Actually, we’ve been slowly transitioning and load-testing the new infrastructure components over the last several weeks, but today we’re ready to share it with everyone!)

What you need to know

Here is what you need to know if you would like to add our IP addresses to your firewall or otherwise whitelist connections from our service. All of our outbound connections are now originating from these IP addresses:

184.72.217.227
184.72.217.230
184.72.219.90
54.174.115.171
54.174.153.212
54.174.91.34
54.235.150.113
54.236.233.28
54.236.233.46

We will reduce this list over the next few months, but we want to provide a transition period for those customers that may need to update their existing MySQL whitelist or firewall configurations. The most up-to-date list of IP addresses will be maintained on our Support Center if you need to reference them in the future. That’s all you need to know, but if you’re interested in the technical details, read on!
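For customers automating their own firewall or whitelist management, the membership check is simple to sketch. The helper below is illustrative only (it is not part of CodeGuard's codebase); it uses Ruby's standard IPAddr class to test whether a connecting address is one of the published backup IPs:

```ruby
require "ipaddr"

# The backup IP addresses published above. Always check the Support Center
# for the current list before relying on it, since it will shrink over time.
CODEGUARD_IPS = %w[
  184.72.217.227 184.72.217.230 184.72.219.90
  54.174.115.171 54.174.153.212 54.174.91.34
  54.235.150.113 54.236.233.28 54.236.233.46
].map { |ip| IPAddr.new(ip) }

# Returns true if the given address is one of the published backup IPs.
def codeguard_address?(address)
  CODEGUARD_IPS.include?(IPAddr.new(address))
end
```

For example, `codeguard_address?("54.174.91.34")` returns true, while an arbitrary address like `"8.8.8.8"` returns false.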

Behind the curtain

It’s no secret that we use Amazon Web Services to power CodeGuard. We’re big fans of their products and continue to be impressed with the pace of development that they maintain. What we set out to do was move our backup servers from the public EC2 cloud into a Virtual Private Cloud (VPC). This would allow us to concentrate our backup resources behind a single gateway server that provides Network Address Translation (NAT). That NAT instance can be assigned a public IP address, so all outgoing traffic from our backup servers to our customers would originate from a single IP address. This approach would give us the desired external IP control while still allowing us to spread our workload across many servers.

Unfortunately, executing this approach was not as straightforward as we had hoped, primarily due to our use of Amazon’s S3 service for backup storage. There are no special accommodations made within the VPC to access S3, which means that all S3 traffic would also go through our NAT. We move several terabytes of data in and out of Amazon’s S3 service every day and our testing showed that with the S3 traffic going through our NAT, we would have a maximum server-to-NAT ratio of about 5:1. That’s 5 servers processing backups for each NAT. More than that and we saturate the NAT’s network connection and start dropping traffic. With our current scale, that would mean at least 30 NAT servers at peak and, consequently, 30+ IP addresses for our customers to manage. Needless to say, we didn’t like that solution – it was expensive, inefficient and very complex to automatically scale.

Network throughput from one of our NAT instances.

Fortunately, in late November, Amazon’s relentless development machine quietly rolled out a small bit of functionality that allowed us to change directions with this project. They published a single API endpoint with an up-to-date list of IP ranges for Amazon’s services. From this, we could determine which IP ranges contain our S3 endpoints, and our backup servers could then be configured to route traffic bound for S3 directly rather than through the NAT. With this arrangement we were able to achieve a server-to-NAT ratio of more than 200:1, which gives us a comfortable margin for our current workload and plenty of room to continue growing.
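The published list is a JSON document of prefixes tagged with a service and region (served from ip-ranges.amazonaws.com). As a rough Ruby illustration, with made-up sample data since the real prefixes change frequently, selecting the S3 ranges for a region looks something like this:

```ruby
require "json"
require "ipaddr"

# A trimmed sample in the shape of Amazon's published ip-ranges document.
# These prefix values are illustrative only, not the live data.
sample = <<~JSON
  {
    "prefixes": [
      {"ip_prefix": "54.231.0.0/17",   "region": "us-east-1", "service": "S3"},
      {"ip_prefix": "176.32.98.0/24",  "region": "us-east-1", "service": "AMAZON"},
      {"ip_prefix": "54.231.128.0/19", "region": "eu-west-1", "service": "S3"}
    ]
  }
JSON

# Select the S3 ranges for our region; each of these would get a route
# that bypasses the NAT instance.
s3_ranges = JSON.parse(sample)["prefixes"]
  .select { |p| p["service"] == "S3" && p["region"] == "us-east-1" }
  .map    { |p| IPAddr.new(p["ip_prefix"]) }
```

Each resulting range can then be tested for membership, e.g. `s3_ranges.first.include?(IPAddr.new("54.231.0.1"))`.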

This is functionality that’s been on our list for a long time now and we’re happy to cross it off. For those of you that have been waiting, we appreciate your patience and hope that this provides some added insight to our approach.

– Jonathan

New Feature: Backup Scheduling


The ability to schedule the time that a particular backup runs is something that we’ve been happy to help customers with through our support team, but until today we had not exposed this functionality in our dashboard. Why the long beta period?

The short answer is: scale. This is a class of problems that is easy to solve when there are not many operations competing for resources, but becomes much more difficult as the number of operations increases. Imagine a simple case where a single backup has to run once per day and there is a dedicated server for this task. In this contrived scenario, the backup could be scheduled to run at any time, and since the server it’s running on is idle all day, you can almost guarantee that the backup operation will start at the scheduled time and complete successfully. In a more real-world scenario, imagine a server that has a backup scheduled to run at a particular time, but is also hosting an active website and a database. The backup could still be scheduled to run at any time, but the certainty that the operation will start as scheduled and complete successfully diminishes.


In our case, we’re running anywhere between 10 and 150 servers to service the more than 200,000 backups we perform on a daily basis. So, how do you solve the challenge of scheduling while maintaining CodeGuard-levels of reliability? We make sure that we have the spare capacity available to run the backups at the times they are scheduled using an infrastructure management service that we’ve developed internally. This service, which we affectionately call Steward, watches after all of our servers, the backup operations running on them and the queue of pending backup and restore operations. When the need arises to add capacity to handle the upcoming load, Steward will start more servers to accommodate. Similarly, as backups finish and the servers become idle, Steward shuts them down. This arrangement allows us to have just-in-time resources available for all of our scheduled, on-demand and unscheduled backup and restore operations. Steward has also helped with server configuration management, versioning, deployment, fault-tolerance and cost reduction, but those are topics for another post!
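To make the scaling decision concrete, here is a toy sketch of the start-or-stop calculation. This is not Steward's actual code, and the per-server slot count is invented for illustration:

```ruby
# Hypothetical number of backup operations one server can run concurrently.
SLOTS_PER_SERVER = 8

# Given the number of queued operations and currently running servers,
# return how many servers to start (positive) or stop (negative).
def capacity_delta(queued_ops, running_servers)
  needed = (queued_ops.to_f / SLOTS_PER_SERVER).ceil
  needed - running_servers
end

capacity_delta(100, 10)  # => 3  (13 servers needed, 10 running)
capacity_delta(0, 4)     # => -4 (idle servers can be shut down)
```

The real service also has to account for scheduled load that has not been queued yet, which is what makes just-in-time capacity planning an interesting problem.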

The excellent everytimezone.com

In addition to the back-end infrastructure changes, we have also updated our dashboard to reflect the new scheduling ability. Not only is there an option for it on the website backup settings page, but we’ve updated all of the times and reporting functionality to accurately reflect backup times in the customer’s local time zone. If you’ve ever tried to schedule a meeting with a colleague or customer in a different time zone, you know that this is not as easy as it sounds! We wanted our interface to be very clear about the scheduled time to ensure that there is no confusion for customers, regardless of what part of the planet they happen to be on relative to our servers.
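As a tiny illustration of the conversion involved, the sketch below stores a customer's scheduled time in UTC from a fixed UTC offset. A production system would use a full time-zone database instead of a fixed offset, since daylight-saving transitions are exactly what make this hard; the helper name and date are arbitrary:

```ruby
# Convert a customer's local scheduled time to UTC for storage.
# The date is fixed arbitrarily for illustration; a real scheduler
# resolves the offset per-date via a time-zone database.
def scheduled_utc(hour, minute, utc_offset)
  local = Time.new(2015, 1, 21, hour, minute, 0, utc_offset)
  local.utc
end

scheduled_utc(2, 0, "-05:00")  # => 2015-01-21 07:00:00 UTC
```

A 2:00 AM backup for a customer in US Eastern time is stored as 7:00 UTC, then rendered back in the customer's zone everywhere it appears in the dashboard.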


A quick note about database backups: currently, database backups for a particular website run at the same time as the website backup. For backup consistency, especially for database-backed applications like WordPress, Joomla!, and Drupal, it’s important that the backups of the database and file content are taken in close proximity to each other. We have worked very hard to ensure that our website file backup and database backup processes impose minimal load on your server and, therefore, on your running applications. If you are concerned about load or are using legacy MySQL storage engines, you can always schedule your website and database backups to occur at an off-peak time.

Ready to give backup scheduling a try? Check out the article in our support center for detailed instructions.

– Jonathan

Selective Restore: Our final restore improvements

Over the last few months we’ve released a number of dramatic improvements to all of our restore features. First, we enhanced the navigation and user interface of our Automatic One-Click Restore. When you request an Automatic Restore, we now present you with a list of the databases for that website in case you want to restore a database at the same time. For WordPress websites, we now detect which of your databases is your WordPress database and make sure you know to restore it alongside your WordPress website.


The next set of improvements revamped our user interface for requesting zipped backups. We also improved the speed of our zip process and increased the time period during which you can access a backup once you have requested one. When we improved our Download Zip feature, we also introduced the concept of ‘Selective Download.’ Now when you choose to download a backup, you can download the entire backup or select individual files and folders from it.


To conclude our restore enhancements, we have completely reimagined our “Individual File Restore” feature. Modeled after the same interface used for requesting a zip, this feature is now called Selective Restore. You can navigate through a tree-like structure of your backups and check exactly which files and folders you want to restore. Before the restore begins, you are presented with a confirmation page showing what you selected. We hope the time it takes to find that one file you need to restore is now cut in half.


With these final improvements, we have now overhauled, redesigned, and re-engineered all three of our restore features: Automatic Restore, Download Zip, and Individual File Restore (now Selective Restore). As a customer, what does this mean for you?

When the time comes to restore your website, it should now be much easier to figure out which option is right for you. We spent a lot of time thinking about the navigation of this part of our application, and figuring out which restore type and which backup version you need is now much simpler. Restoring your databases alongside your website is as easy as checking a box: when you perform an Automatic Restore, we show you a list of all of your databases, and you can choose to restore none of them, all of them, or some of them.


What else will you notice? Speed. We put a lot of work into the back-end code that powers all three of these features to make them more robust and much faster. Each website in our system can have hundreds of daily backups, and each backup can contain hundreds of thousands or even millions of files! Getting the metadata for a single folder from a single backup, so that we can populate this friendly new interface for Selective Download and Selective Restore, can be a time-intensive and computationally expensive process. This data is generally used in the background by our powerful backup servers, not our smaller web servers. So, in order to avoid overwhelming our web servers or making our customers wait for an extended period of time, we had to get creative.

Each backup version has a unique archive of metadata that is stored with gzip compression and AES-256 encryption in Amazon’s S3 service. To retrieve it as efficiently as possible, we created a pipeline that simultaneously transfers, decrypts, decompresses, and serves the requested information as JSON to the browser. Each interaction by the customer initiates an AJAX request to the pipeline service, so the data is loaded lazily as needed and then cached in the browser to avoid duplicate requests. We wanted this functionality to feel fast for customers, and this process lets us quickly identify and transfer the proverbial needle in the haystack in a way that intelligently uses our existing resources rather than requiring us to add capacity. If something happens to your website, rest assured that your content will be restored more quickly than ever, giving you that peace of mind much sooner as well.
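As a simplified sketch of the archive format described above: the metadata is gzip-compressed, then AES-256 encrypted, and retrieval reverses the steps. The cipher mode, key handling, and helper names here are illustrative assumptions, not CodeGuard's actual implementation, and the real pipeline streams the data rather than buffering whole archives in memory:

```ruby
require "openssl"
require "zlib"

# Pack a JSON metadata blob: gzip first, then AES-256 encrypt.
# CBC mode is an assumption; key and IV management are omitted.
def pack_metadata(json, key, iv)
  cipher = OpenSSL::Cipher.new("AES-256-CBC")
  cipher.encrypt
  cipher.key = key
  cipher.iv = iv
  cipher.update(Zlib.gzip(json)) + cipher.final
end

# Unpack: decrypt, then decompress back to the original JSON.
def unpack_metadata(blob, key, iv)
  cipher = OpenSSL::Cipher.new("AES-256-CBC")
  cipher.decrypt
  cipher.key = key
  cipher.iv = iv
  Zlib.gunzip(cipher.update(blob) + cipher.final)
end
```

Chaining the decrypt and decompress stages (rather than writing intermediate files) is what lets a pipeline like this serve a single folder's listing to the browser without materializing the entire archive.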

After you perform a restore with CodeGuard, we hope you’ll notice the changes we made. Currently, after a restore completes, we send you an email letting you know it finished; we now also send an email asking if you’d like to give us feedback about our restore process. If you feel like sharing your experience, please answer our brief survey when your restore completes! While we are certain we made many improvements to these features, we are always interested in improving them further. If you think of anything you would have liked to be different, let us know!

In the meantime, we hope you enjoy these final restore improvements. Whether you need to restore your entire website, restore one or more databases, request a zip of your entire website or of only certain files and folders, or automatically restore select files and folders, CodeGuard has you covered!

Natalie

A Day in the Life of a CodeGuardian


At CodeGuard, we’ve been able to push a high quality product to market quickly and scale it reliably. Today, we would like to share what happens on a daily basis to make this type of rapid production possible.

Daily Routine

Everyone in the office has different schedules and different priorities. While each team member’s daily routine is different and changes from day to day, here’s what a typical day at CodeGuard for one of our employees might look like.

8:00 Wake up, eat, and get ready for work
9:00 Arrive at work, respond to emails, and plan for the day
9:30 Begin testing new selective restore features
10:00 Gather with the rest of the team for our daily scrum meeting
10:15 Modify the selective restore pipeline to address an unhandled case
10:45 Write an article for our weekly blog post
12:00 Review and implement suggestions made by another team member
12:30 Eat lunch
1:00 More testing for the new selective restore features
1:30 Monthly meeting with supervisor
2:00 Final round of testing for the new selective restore features
3:00 Deploy new selective restore features
3:15 Final testing for the new internal performance metrics dashboard
4:00 Deploy new internal dashboard
4:15 Celebrate two successful deploys by ringing our ceremonial CodeGuard gong
4:30 Leave early and enjoy the holiday weekend

_D1A0488_v1fs

Development Cycle

Now that you’ve seen what a typical day at CodeGuard looks like, let’s talk about our development process.

Our day begins with our daily morning scrum meeting. For those unfamiliar with Agile development practices, a scrum meeting is a way for members of a team to share updates on ongoing projects. In our scrum meetings we form a circle, and each member shares a high-level overview of the tasks they completed yesterday, the tasks they plan to complete today, and the tasks they need help with in order to move forward. These meetings generally last 10-15 minutes. The rest of the day involves completing the tasks covered during our morning scrum. Typical tasks include planning new projects, developing current projects, or testing completed projects.

Planning

Planning begins at our conference table, with each project participant expressing their own thoughts and concerns about project execution. Because our team is small, there is no designated project leader. Instead, one team member is asked to research the project beforehand and help guide the discussion. Team members are encouraged to contribute, and sitting idly is not an option. One of the great things about working for a small company is that we have the freedom to try new things. For example, although our main application is written in Ruby, a few of our most recent projects have been written in Go. This level of flexibility would be hard to find at a larger company.

Development

Once we have a plan, we break the project apart into manageable tasks and begin development. We are lucky enough to have access to top-of-the-line tools for writing software. Each employee has their own workstation consisting of an adjustable-height desk, a 27-inch retina display, and the latest MacBook Pro. Although each team member is free to choose their own development environment, most of us gravitate towards command-line tools like vim and tmux. We use GitHub issues to keep track of what is being worked on and by whom.

Testing

After a task is completed, it must be submitted for review. Depending on the importance of the task, one or several other team members will test and review changes in our staging environment before submitting the final changes to our production servers. In addition to automated testing, the review process often involves analyzing code for uncaught syntactic and logical errors.

Celebrate

Although we spend most of our time working hard to improve our service, being a CodeGuardian isn’t all work and no play. To keep things lighthearted and fun, we have Happy Hour every Friday afternoon, and we routinely plan outings to nearby events. Our most recent adventure involved participating as extras in a big movie production being filmed in downtown Atlanta.

– Taylor