Thursday, April 9, 2020

Stupid Git Tricks: Interactive Rebase

I like to provide a nice, clean history in my pull requests. Reviewers should be able to follow each commit, and see how the functionality is built up. No extraneous commits. Nothing out of order. Everything looking like one smooth path from idea to implementation.

Unfortunately, my development process doesn't quite work that way. For one thing, I commit (and push) frequently — as in every 10-15 minutes when I'm making lots of changes. For another, I'll often realize that there's a small change that should have been introduced several commits earlier. For these and other reasons, I find git rebase -i invaluable.

OK, some of you are probably outraged: “you're changing history!” Settle down. This is for development branches, not master. And I'm willing to adapt in a team setting: if my team members want to see messy commit histories in a pull request, I'm OK with giving that to them. But only if they use squash merges.

So, here are a few of the ways that I change history. You're free to avoid them.

Combining commits

Here's one morning's commit history:

commit 6aefd6989ba7712cb047d661b68d34c888badea4 (HEAD -> dev-writing_log4j2, origin/dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 12:13:19 2020 -0400

    checkpoint: content updates


commit e8503f01c72618709ac5231a78cfa8549fcfb7b3
Author: Me 
Date:   Sun Apr 5 09:22:51 2020 -0400

    checkpoint: content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Three hours of changes, split up over eight commits, with regular pushes so I wouldn't lose work if my SSD picked today to fail. I really don't want to see all of those in my history.

The solution is to squash those commits down using an interactive rebase, passing the hash of the last commit that I want to keep as-is (the one just before the first “checkpoint”):

git rebase -i 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c

When I run this, it starts my editor and shows me the following:

pick e8503f0 checkpoint: content updates
pick f71ddca checkpoint: content updates
pick a8d7a25 checkpoint: content updates
pick 6b87b9b checkpoint: content updates
pick 556a346 checkpoint: content updates
pick 466dd26 checkpoint: content updates
pick 0034657 checkpoint: content updates
pick 6aefd69 checkpoint: content updates

# Rebase 8bdb788..6aefd69 onto 8bdb788 (8 commands)
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
# These lines can be re-ordered; they are executed from top to bottom.
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
# Note that empty commits are commented out

A list of commits, instructions on how to work with them, and a few warnings about what happens if I do something dumb. To squash these commits I update all but the first to be a “fixup”:

pick e8503f0 checkpoint: content updates
f f71ddca checkpoint: content updates
f a8d7a25 checkpoint: content updates
f 6b87b9b checkpoint: content updates
f 556a346 checkpoint: content updates
f 466dd26 checkpoint: content updates
f 0034657 checkpoint: content updates
f 6aefd69 checkpoint: content updates

Save this and exit the editor, and Git applies all of those changes:

Successfully rebased and updated refs/heads/dev-writing_log4j2.

And now when I look at my history, this is what I see:

commit 51f5130422b524603d6249ef40e012aeecde5422 (HEAD -> dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 09:22:51 2020 -0400

    checkpoint: content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Note that the last commit hash has changed, and my working HEAD no longer refers to the origin branch. This means that I'm going to need to force-push these changes. But before that, there's one more thing that I want to do:

git commit --amend -m "content updates" --reset-author

This command does two things. First, it updates my commit message: this is no longer a “checkpoint” commit. The second thing it does is update the basic commit info, in this case just the timestamp. If you look closely at the history above, you'll see that the squashed commit carries the timestamp of the first “checkpoint” commit; --reset-author makes the history more closely reflect what actually happened (it can also be used to pretend that other people didn't contribute to the commit, but I'll assume you're more honorable than that).

Now the log looks like this:

commit fdef5d6f0a19218784b87a596322816347db2232 (HEAD -> dev-writing_log4j2)
Author: Me 
Date:   Sun Apr 5 12:22:46 2020 -0400

    content updates

commit 8bdb788421c56cb0defe73ce87b9e1ffe4266b0c
Author: Me 
Date:   Sat Apr 4 13:57:27 2020 -0400

    add reference to sample project

Which is what I want to see, so time to force-push and overwrite the previous chain of commits:

> git push -f
Counting objects: 4, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 4.52 KiB | 2.26 MiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
To ssh://
 + 6aefd69...fdef5d6 dev-writing_log4j2 -> dev-writing_log4j2 (forced update)

I should note here that the previous chain of commits still exists in your repository. If, for some reason, you want to retrieve them, you can explicitly check out the former head commit:

git checkout -b recovery 6aefd6989ba7712cb047d661b68d34c888badea4

Of course, if you close your terminal window, you might not find that commit hash again, so if you're worried you should write it down somewhere. When I'm making a large set of changes, I'll often create a temporary branch from the one that's being rebased, just in case (unfortunately, I often forget to switch back to the branch that I want to rebase).
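Another safety net, if you do lose the hash, is the reflog: Git keeps a local record of every commit that HEAD and each branch have pointed to. Here's a sketch in a throwaway repository (the file and branch names are invented for illustration):

```shell
# throwaway repo: show that a rewritten-away commit survives in the reflog
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email me@example.com
git config user.name "Me"

echo one > notes.txt && git add notes.txt
git commit -qm "checkpoint: content updates"
old=$(git rev-parse HEAD)                  # the hash we're about to orphan

git commit --amend -qm "content updates"   # rewrite history, as above

git reflog                                 # the pre-amend commit is still listed here
git branch recovery "$old"                 # ...and can be brought back on a branch
git log --oneline recovery
```

Bear in mind that reflog entries eventually expire (by default after 90 days, or 30 for unreachable commits), so this is a recovery tool, not an archive.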

Re-ordering commits

Especially at the start of a new project, I might jump around and edit several different things, resulting in a messy commit history:

pick fc46b5b update docs
pick 0f734fb minor change to feature X
pick 2233c01 update docs
pick fe56f59 another change to feature X
pick d3fb025 related change to feature Y
pick aec87c1 update docs
pick 66ef266 something unrelated to either X or Y
pick 96179b3 changing Y
pick 904a779 update docs

Interactive rebase allows you to move commits around, and then optionally squash those moved commits:

pick fc46b5b update docs
f 2233c01 update docs
f aec87c1 update docs
f 904a779 update docs
pick 0f734fb minor change to feature X
f fe56f59 another change to feature X
pick d3fb025 related change to feature Y
f 96179b3 changing Y
pick 66ef266 something unrelated to either X or Y

There are a couple of gotchas when you do this. First, you need to make sure that you're not changing both X and Y in the same commit. If you do, you can still squash the commits together, but it's pointless to try to track the work in each feature separately.

Second, make sure that you preserve order: in my example, commit 0f734fb happened before fe56f59, and the interactive rebase needs to keep them in that order. If you don't, you can end up with merge conflicts that are challenging to resolve.

Lastly, and most important, make sure you have the same number of commits that you started with. If you accidentally delete a commit rather than move it, you will lose that work. For this reason, I tend to use interactive rebase on small pieces of my history, perhaps making several passes over it.

Editing commits

When writing my article about Log4J2 appenders, I saw a comment that I wanted to change in the accompanying example code. Unfortunately, it wasn't the HEAD commit:

commit 8007214ef232abf528baf2968162b51dcd2c09ca
Author: Me 
Date:   Sat Apr 4 09:34:53 2020 -0400

    README

commit 38c610db6a02747d7017dff0a9c2b7ed290e30e1
Author: Me 
Date:   Sat Apr 4 08:34:12 2020 -0400

    stage-10: add tests

commit 5dfd79e3f879038e915fa04c83f8eb9b0f695e35
Author: Me 
Date:   Tue Mar 31 08:38:17 2020 -0400

    stage-9: implement a lookup

There are two ways that I could have approached this. The first would have been to create a new commit and then reorder it and turn it into a fixup. The second, which I used, is to edit the file as part of an interactive rebase, by marking the commit with an "e":

pick 5dfd79e stage-9: implement a lookup
e 38c610d stage-10: add tests
pick 8007214 README

When I do this, Git works through the commits and stops when it reaches the marked one:

Stopped at 38c610d...  stage-10: add tests
You can amend the commit now, with

  git commit --amend 

Once you are satisfied with your changes, run

  git rebase --continue

I can now edit any files in my working tree (they don't have to be part of the original commit). Once I'm done, I do a git add for changed files, followed by both git commit --amend and git rebase --continue. After the rebase completes, I can force-push the changes.
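The whole edit workflow can be scripted end-to-end. Here's a sketch in a throwaway repository, with invented file names; sed stands in for the interactive editor (both for marking the commit as "edit" and for the file change) so the example runs unattended:

```shell
# throwaway repo with two commits; the older one has a comment to fix
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email me@example.com
git config user.name "Me"
echo '// old comment' > Lookup.java
git add Lookup.java && git commit -qm "stage-9: implement a lookup"
echo 'tests' > tests.txt
git add tests.txt && git commit -qm "stage-10: add tests"

# change "pick" to "edit" on the first todo line (normally done in your editor)
GIT_SEQUENCE_EDITOR='sed -i 1s/^pick/edit/' git rebase -i --root

# the rebase has stopped at stage-9: fix the comment, fold it in, continue
sed -i 's|// old comment|// new comment|' Lookup.java
git add Lookup.java
git commit --amend --no-edit
git rebase --continue
```

The same two commits come out the other side, with new hashes and the corrected comment in stage-9.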

Beware that editing commits can introduce merge conflicts: if a later commit touches the same code, you'll have to stop and resolve the conflict. This is more likely when you edit early commits, or when the edits are wide-ranging. It is far less likely for changes like comments.
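The first approach I mentioned, committing the fix separately and then turning it into a fixup, has built-in support: git commit --fixup writes a commit message that git rebase -i --autosquash uses to reorder and squash automatically. A sketch, again in a throwaway repository with invented file names (GIT_SEQUENCE_EDITOR=true simply accepts the generated todo list):

```shell
# throwaway repo: two commits, then a fix that belongs in the second one
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email me@example.com
git config user.name "Me"
echo 'lookup code' > lookup.txt
git add lookup.txt && git commit -qm "stage-9: implement a lookup"
echo 'tests' > tests.txt
git add tests.txt && git commit -qm "stage-10: add tests"

echo 'one more test' >> tests.txt
git add tests.txt
git commit -q --fixup ':/stage-10'   # message becomes "fixup! stage-10: add tests"

# --autosquash moves the fixup under its target and squashes it automatically
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
```

Afterward the branch is back to two commits, with the extra test folded into stage-10.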

Cherry-picking into the middle of a branch

You may be familiar with git cherry-pick, which takes an arbitrary commit and puts it at the HEAD of your current branch. This can be useful when two teams are working on the same general area of the codebase: often one team will incidentally do something that the other team finds valuable.

Interactive rebase is like cherry-picking on steroids: you can insert a commit anywhere in your commit tree. To be honest, I find this more risky than beneficial; instead I would cherry-pick to HEAD and then perhaps use an interactive rebase to move the commit to where it “belongs.” But in the interest of “stupid git tricks,” here we go.

Let's say that you've been working on a branch and have been making changes, starting with changes to the build scripts. Then you talk with a colleague, and learn that she has also made changes to the build. You could cherry-pick her change to the end of your branch and use it moving forward, but you're somewhat OCD and want to keep the build changes together. So you fire up git rebase -i and add your colleague's commit as a new “pick”:

pick ffc954d build scripts
p 1438a13d11d6001de876a034f434a050c09b587d
pick b497403 update 1
pick 18e8415 update 2
pick 33a4e9d update 3

Now when you do a git log, you see something like this:

... skipping two commits

commit 7b62acb8d9100f379a0d43e3227c36ae91c1edd9
Author: Me 
Date:   Fri Mar 27 10:11:01 2020 -0400

    update 1

commit c579ed88403354faed83213da63d4546c5aa13b5
Author: Someone Else 
Date:   Sun Jan 5 09:30:14 2020 -0500

    some build changes

commit ffc954dc41555282ece3e2b7a0197472c0af9f11
Author: Me 
Date:   Mon Jan 6 08:02:30 2020 -0500

    build scripts

Note that the commit hash has changed: from 1438a13d to c579ed88. This is because it's now part of a new branch: a commit hash is based not merely on the content of the commit, but also on the commit chain that it's a part of. However, the author's name and the author date are unchanged.

Wrapping up: a plea for clean commit histories

A standard Git merge, by preserving the chains of commits that led to the merge point, is both incredibly useful and incredibly annoying. It's useful, in that you can move along the original branch to understand the context of a commit. It's annoying, in that your first view of the log shows the commits intermingled and ordered by date, completely removing context.

I find messy histories to be similar. Software development doesn't happen in a clean, orderly fashion: developers often attack a problem from multiple sides at once. And that can result in commit histories that jump around: instead of “fix foo” you have “add test for bar”, followed by “make test work”, followed by an endless string of “no, really, this time everything works”.

Maybe you find that informative. If not, do your future self (and your coworkers) a favor and clean it up before making it part of the permanent record.

Monday, March 30, 2020

S3 Troubleshooting: when 403 is really 404

It's generally a bad idea to click on links in strange websites, but this one is key to this post. If you were to click this link, you'd see a response like the following, and more importantly, the HTTP response code would be a 403 (Forbidden).

  <Message>Access Denied</Message>

However, that bucket is open to the world: AWS reminds me of this by flagging it as “Public” in bucket listings, and putting a little orange icon on the “Permissions” tab when I look at its properties. And if you click this similar link you'll get a perfectly normal test file.

The difference between the two links, of course, is that the former file doesn't exist in my bucket. But why isn't S3 giving me a 404 (Not Found) error? When I look at the list of S3 error responses I see that there's a NoSuchKey response — indeed, it's the example at the head of that page.

As it turns out, the reason for the 403 is called out in the S3 GetObject API documentation:

  • If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
  • If you don’t have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.

While my bucket allows public read access to its objects via an access policy, that policy follows the principle of least privilege and only grants s3:GetObject. As do almost all of the IAM policies that I write for things like Lambdas.

Which brings me to the reason for writing this post: Friday afternoon I was puzzling over an error with one of my Lambdas: I was creating dummy files to load-test an S3-triggered Lambda, and it was consistently failing with a 403 error. The files were in the source bucket, and the Lambda had s3:GetObject permission.

I had to literally sleep on the error before realizing what happened. I was generating filenames using the Linux date command, which would produce output like 2020-03-27T15:43:31-04:00. However, S3 notifications url-encode the object key, so the key in my events looked like this: 2020-03-27T15%3A43%3A31-04%3A00-c. Which, when passed back to S3, didn't refer to any object in the bucket. But because my Lambda didn't have s3:ListBucket, I was getting the 403 rather than a 404.

So, to summarize the lessons from my experience:

  1. Always url-decode the keys in an S3 notification event. How you do this depends on your language; for Python use the unquote_plus() function from the urllib.parse module.
  2. If you see a 403 error, check your permissions first, but also look to see if the file you're trying to retrieve actually exists.
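For the first lesson, the decoding is a one-liner in Python. Here's a minimal sketch (the helper name and the event shape are illustrative; real S3 notification events carry many more fields):

```python
from urllib.parse import unquote_plus

def object_keys(event):
    """Return url-decoded (bucket, key) pairs from an S3 notification event."""
    return [
        (rec["s3"]["bucket"]["name"], unquote_plus(rec["s3"]["object"]["key"]))
        for rec in event["Records"]
    ]

# a key generated by the Linux date command, as S3 delivers it in the event
event = {"Records": [{"s3": {
    "bucket": {"name": "example-bucket"},
    "object": {"key": "2020-03-27T15%3A43%3A31-04%3A00-c"},
}}]}

print(object_keys(event))   # [('example-bucket', '2020-03-27T15:43:31-04:00-c')]
```

unquote_plus also turns “+” into a space, which matters for keys that contain spaces.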

You'll note that I didn't say “grant s3:ListBucket in your IAM policies.” The principle of least privilege still applies.

Monday, January 6, 2020

The Future of Open Source

The world of open source software seems to be going through a period of soul-searching. On the one hand, individual maintainers have retracted packages, causing disruption for the communities that depended on those packages. On the other, software-as-a-service providers are making more money from some applications than their creators.

This is all happening in a world where businesses depend on open-source to operate. It doesn't matter whether you're an individual launching a startup with PHP and MySQL, or a multi-national replacing your mainframe with a fleet of Linux boxes running Java. Your business depends on the work of people that have their own motivations, and those motivations may not align with yours. I think this is an untenable situation, one that will eventually resolve by changing the nature of open-source.

Before looking at how I think it will resolve, I want to give some historical perspective. This is one person's view; you may not agree with it.

I date the beginning of “professional” open source to March 1985: that was the month that Dr. Dobb's Journal published an article by Richard Stallman, an article that would turn into the GNU Manifesto. There was plenty of freely available software published prior to that time; my experience was with the Digital Equipment Corporation User Society (DECUS), which published an annual catalog of programs ranging in complexity from fast Fourier transform routines to complete language implementations. These came with source code and no copyright attached (or, at least, no registered copyright, which was an important distinction in the 1970s and early 1980s).

What was different about the GNU Manifesto, and why I refer to it as the start of “professional” open source, was that Stallman set out a vision of how programmers could make money when they gave away their software. In his view, companies would get software for free but then hire programmers to maintain and enhance it.

In 1989, Stallman backed up the ideas of the GNU Manifesto with the GNU General Public License (GPL), which was applied to the software produced by the GNU project. This license introduced the idea of “copyleft”: a requirement that any “derivative works” also be licensed under the GPL, meaning that software developers could not restrict access to their code. Even though that requirement was relaxed in 1991 with the “library” (now “lesser”) license, meaning that you could use the GNU C compiler to compile your programs without them becoming open source by that act, the GPL scared most corporations away from any use of the GNU tools (as late as 1999, I was met with a look of shock when I suggested that the GNU C compiler could make our multi-platform application easier to manage).

In my opinion, it was the Apache web server, introduced in 1995, that made open-source palatable (or at least acceptable) to the corporate world. In large part, this was due to the Apache license, which essentially said “do what you want, but don't blame us if anything goes wrong.” But also, I think it was because the corporate world was completely unprepared for the web. To give a sense of how quickly things moved: in 1989 I helped set up the DNS infrastructure for a major division of one of the world's largest corporations; I had only a few years of experience with TCP/IP networking, but it was more than the IT team. NCSA Mosaic appeared four years later, and within a year or two after that companies were scrambling to create a web presence. Much like the introduction of PCs ten years earlier, this happened outside of corporate IT; while there were commercial web-servers (including Microsoft and Netscape), “free as in beer” was a strong incentive.

Linux, of course, was a thing in the late 1990s, but in my experience wasn't used outside of a hobbyist community; corporations that wanted UNIX used a commercial distribution. In my view, Linux became popular due to two things. First, Eric Raymond published The Cathedral and the Bazaar in 1997, which made the case that open source was actually better than commercial systems: it has to be good to survive. Second, after the dot-com crash, “free as in beer” became a selling point, especially to the startups that would create “Web 2.0”.

Jumping forward 20 years, open-source software is firmly embedded in the corporate world. While I'm an oddity for running Linux on the desktop, all of the companies I've worked with in the last ten or more years used it for their production deployments. And not just Linux; the most popular database systems are open source, as are the tools to provision and manage servers, and even productivity tools such as LibreOffice. And for most of the users of these tools, “free as in beer” is an important consideration.

But stability is (or should be) another important consideration, and I think that many open-source consumers have been lulled into a false sense of stability. The large projects, such as GNU and Apache, have their own repositories and aren't going anywhere. And the early “public” repositories, such as SourceForge and Maven Central, adopted a policy that “once you publish something here, it will never go away.” But newer repositories don't have such a policy, and as we saw with left-pad in 2016 and chef-sugar in 2019, authors are willing and able to pull their work down.

At the same time, companies such as Mongo and Elastic NV found that releasing their core products as open-source might not have been such a great idea. Software-as-a-service companies such as AWS are able to take those products and host them as a paid service, often making more money from the hosting than the original companies do from the services they offer around the product. And in response, the product companies have changed the license on their software, attempting to cut off that usage (or at least capture a share of it).

Looking at both behaviors, I can't help but think that one of the core tenets of the GNU manifesto has been forgotten: that the developers of open-source software do not have the right to control its use. Indeed, the Manifesto is quite clear on this point: “[programmers] deserve to be punished if they restrict the use of these programs.”

You may or may not agree with that idea. I personally believe that creators have the right to decide how their work is used. But I also believe that releasing your work under an open-source license is a commitment, one that can't be retracted.

Regardless of any philosophical view on the matter, I think there are two practical outcomes.

The first is that companies — or development teams — that depend on open-source software need to ensure their continued access to that software. Nearly eight years ago I wrote about using a local repository server when working with Maven and Java. At the time I was focused on coordination between different development teams at the same company. If I were to rewrite the post today, it would focus on using the local server to ensure that you always have access to your dependencies.

A second, and less happy, change is that I think open-source creators will lose the ability to control their work. One way this will happen is for companies whose products are dependent on open-source to provide their own public repositories — indeed, I'm rather amazed that Chef doesn't offer such a repository (although perhaps they're gun-shy after the reaction to their ham-fisted attempt to redistribute chef-sugar).

The other way this will happen is for service-provider companies to fork open-source projects and maintain their own versions. Amazon has already done this, for Elasticsearch and also OpenJDK; I don't expect them to be the only company to do so. While these actions may damage the companies' reputations within a small community of open-source enthusiasts, the much larger community of their clients will applaud those actions. I can't imagine there are many development teams that will say “we're going to self-host Elasticsearch as an act of solidarity”; convenience will always win out.

If you're like me, a person responsible for a few niche open-source projects, this probably won't matter: nobody's going to care about your library (although note that both left-pad and chef-sugar started out as single-maintainer niche projects). But if you're a company that is planning to release your core product as open-source, you should think long and hard about why you want to do this, and whether your plan to make money is viable. And remember these words from the GNU Manifesto: “programming will not be as lucrative on the new basis as it is now.”