Thursday, November 29, 2012

How To Track A Subversion Branch In A Git Repository

The Problem:

For whatever reason, your project is using Subversion for version control and you're using Git to do your work. You need to track a branch from Subversion in your Git repository.

The Solutions:

I searched around for the solution to this problem and found many a circuitous web page describing how to do this. Here's your simple, step-by-step way of solving this problem.

Step 1: Add The Subversion Branch To Your Git Config

Edit $REPO/.git/config, adding the following:

  [svn-remote "svn-thebranch"]
          url = http://svn.yourrepository.com/yourproject/branches/thebranch/
          fetch = :refs/remotes/thebranch

Where thebranch is the name of the branch you want to track.
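If you'd rather script the edit than open the file by hand, a tiny helper can print the stanza for you. This is just a sketch (the function name is mine; thebranch and the URL are the same placeholders as above):

```shell
# Prints an [svn-remote] stanza for the given branch name and project base URL.
svn_remote_stanza() {
  branch="$1"
  baseurl="$2"
  printf '[svn-remote "svn-%s"]\n' "$branch"
  printf '\turl = %s/branches/%s/\n' "$baseurl" "$branch"
  printf '\tfetch = :refs/remotes/%s\n' "$branch"
}

svn_remote_stanza thebranch http://svn.yourrepository.com/yourproject
```

Then append the output to your repo's config with `svn_remote_stanza thebranch http://svn.yourrepository.com/yourproject >> .git/config`.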

Step 2: Update The Subversion Data

To do this, run:

  git svn fetch --fetch-all

and go get a drink. Or three, depending on how much data the Subversion repository is carrying. For my project this takes about 10-15 minutes easily.

Step 3: Check Out The Subversion Branch Locally

Simply type:

  git checkout -b thebranch remotes/thebranch

Now you've got a local branch that tracks the branch in Subversion, and you should be sitting in a local copy of it.

Step 4: Push The Branch To Your Git Repository

Now push this branch up to your Git repository with:

  git push [--set-upstream] origin HEAD:my-copy-of-thebranch

If you include the --set-upstream argument, your local branch will be set up to track the branch you've just created remotely.

That's it!

Monday, October 29, 2012

Beta Testing Steam On Linux

Valve is taking applications from experienced Linux users to beta test their Steam client port to Linux!

Yes, that's right. Ported to Linux!

You can sign up for the beta here. There's no guarantee you'll be selected, but you definitely won't if you don't try!

Valve Linux Steam Client Beta Application

We're looking for Linux gamers to install and test our new Steam for Linux client. We are primarily interested in experienced Linux users. 
In order to take the survey, you need to first login with your Steam account to link your response with your Steam ID.

Monday, October 8, 2012

C, Perl, Swig And Off64_t

On my project at work (called Proton) I've been working on dynamic language bindings, specifically Perl and Ruby. I had previously done the same language bindings for Qpid and so expected the same work would produce the same results in the new code base.

(I think all stories of frustration and despair start out this way, don't they?)

I ran into a stone wall, though, while doing the Perl bindings. Ruby fell into place correctly, and I expected the same result with Perl. But I was surprised when, after setting up the Swig file and putting the CMake files into place, I got the error message:

error: unknown type name ‘off64_t’

when I tried to build the Perl bindings. The error wasn't in our code, but referred to code within the Perl distribution itself. After a fruitless day of searching for an answer, and only finding where others had reported similar problems, I felt stumped.

That is, until I hopped into the #perl channel on Freenode and asked the Perl developers there for help. And one guy, named mucker, came to the rescue. So, for anybody who's experienced this problem and has been frustrated trying to find a solution, here you go!

What you need to do is provide the same C compiler flags to your build that were used to build the Perl interpreter itself. To get the flags from the command line, you just do:

perl -MConfig -e "print \$Config{ccflags}"

which will output something like this:

mcpierce@mcpierce-laptop:~ $ perl -MConfig -e "print \$Config{ccflags}"
-D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64

Assign that to an environment variable, pass it into your build environment, and your build will work.

And if you're using CMake as we do, you can add the following snippet to your Perl bindings section:

execute_process(COMMAND perl -MConfig -e "print \$Config{ccflags}"
                OUTPUT_VARIABLE PERLCFLAGS)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${PERLCFLAGS}")

and your build will be just fine! (Note the quotes around the variable: the flags contain spaces, and an unquoted set() call would turn them into a semicolon-separated list.)

Running With New Shoes

Over the past few months, really since June, I've been trying to get back into some kind of shape. I had taken nearly two and a half years off from going to the gym after having gall bladder surgery to remove a stone the size of a golf ball! At first I didn't go back to the gym because I was deathly afraid of ripping open a scar. Then I kind of got complacent and didn't work out, convincing myself "I'll start next month". Then finally I was too busy eating donuts and drinking coffee to be bothered with working out.

20 pounds later, it was time to take it all back.

One of my biggest challenges was running. I always thought I hated running, and any time my workout came to the cardio phase, I felt a sense of dread as I climbed onto the treadmill to run. It was so bad that for a long time I would get nasty cramps in my calves that stopped me from running for a week at a time.

When I started back to working out, I decided to focus on getting my running game up to speed, so to speak. Starting slow, my goal is to get to the point where I can run 5 or 6 miles without feeling like I'm going to DIE! Which felt like an unrealistic goal when, to start, running barely a mile made me want to punch a baby and puke. But I persevered over the summer and have now worked to the point where a near four-mile run doesn't make me feel destroyed: on Friday I ran 40 minutes and, with warm ups, covered nearly 3.8 miles.

To reward myself for this milestone, I went out and bought a new pair of running shoes. I've been looking for a while at the various lightweight shoes to get a more natural feel when running. And on Saturday I went over to the local sports shop to check out their stock. I settled on the Fila Skele-Toes minimalist shoe. They have a nice, snug feel without being too tight. The fit is adjustable with a Velcro band over the top and two bands on either side of the heel on each foot. Each toe has its own sleeve except for the pinkie and fourth toe on each foot.

Running in them for the first time today (after spending the weekend wearing them around the house while my wife and oldest son made jokes about them), it was different, but not a weird experience. I had read on a few pages online that running in minimalist shoes was an adjustment and you should ease yourself into it. So I started off slowly, reducing my running speed to start off. But it felt so natural I decided to try some incline runs, which felt good! The shoes didn't feel like they were working against me, and my calves felt more relaxed as I ran. "I didn't feel like my foot was on a strange fulcrum" is probably the best way to describe the experience.

All in all, I'm pretty impressed with the shoes. I didn't wear socks with them, and except for a small spot on my right foot where I wasn't quite used to the new shoe's feel (which will go away) I'm quite happy with the new shoes!

Wednesday, August 29, 2012

Elephants Reunited After 20 Years

Jane Goodall points out to us that animals and humans aren't that different, that we're not so greatly separated from them by our emotions or feelings. Here's a video of two elephants that have been reunited after 20 years of separation.
"...we have found that after all, there isn’t a sharp line dividing humans from the rest of the animal kingdom. It’s a very wuzzy line. It’s getting wuzzier all the time as we find animals doing things that we, in our arrogance, used to think was just human."

Thursday, August 9, 2012

Git: Fixup And Autosquash


Two features that were introduced in Git 1.7 that I only recently learned about are making my development life so much easier! They are autosquash and fixup.

What these two features do is let you do incremental commits that are automatically staged so that, at a later time, you can squash them all together into a single commit. That may not sound like a big deal, but when you're working with a lot of tiny changes and you want to eventually combine them, then it's a definite time saver.

MY OLD WAY

The way I used to work was to commit simple changes with messages like "merge with language binding" or "merge with SSL update" and similar. That worked fine until I had a whole bunch of commits and needed to move them all around so that they sat under the target commit. Not impossible, but somewhat unwieldy at least.

After I would get to a point where I was happy with the incremental changes, I would do an interactive rebase and move the commits around and squash them.

mcpierce@mcpierce-laptop:assembler (master) $ vi cpuid.s
mcpierce@mcpierce-laptop:assembler (master) $ git commit -a -m "merge with cpuid"
(do some more changes)
mcpierce@mcpierce-laptop:assembler (master) $ vi cpuid.s
mcpierce@mcpierce-laptop:assembler (master) $ git commit -a -m "merge with cpuid"
(now go and put them together)
mcpierce@mcpierce-laptop:assembler (master) $ git rebase -i HEAD~3

And at this point I have the commits and just need to change "pick" to "squash" for the commits I want to squash together.

pick dd1893c First assembler app: get CPUID information.
pick 5741c73 merge with cpuid
pick 8b5dffb merge with cpuid

# Rebase dd1893c..8b5dffb onto 04aad6e
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

You can see in the above that my two additional commits are lined up. At this point I have to change "pick" to "squash" in the last two and then save this in order for the squashing to occur. Not a huge effort, sure, but imagine if you had 15 or 20 such small commits and wanted them all squashed. You could use some vi search-and-replace magic, but how about leveraging git itself to do the job for you?

AUTOSQUASH

If you want to have git do the job for you, then all you need to do is modify the commit message when saving the incremental changes as such:

mcpierce@mcpierce-laptop:assembler (master) $ vi cpuid.s
mcpierce@mcpierce-laptop:assembler (master) $ git commit -a -m "squash! First assembler app: get CPUID information."
(do some more changes)
mcpierce@mcpierce-laptop:assembler (master) $ vi cpuid.s
mcpierce@mcpierce-laptop:assembler (master) $ git commit -a -m "squash! First assembler app: get CPUID information."
mcpierce@mcpierce-laptop:assembler (master) $ git rebase -i HEAD~3 --autosquash

Now when the interactive rebase starts you'll see:

pick dd1893c First assembler app: get CPUID information.
squash 5741c73 squash! First assembler app: get CPUID information.
squash 8b5dffb squash! First assembler app: get CPUID information.

# Rebase dd1893c..8b5dffb onto 04aad6e
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Git has already lined up the commits below the commit into which they will be squashed. Now when you save the rebase instructions, git will prompt you for the message on the squashed commit!


# This is a combination of 3 commits.
# The first commit's message is:
First assembler app: get CPUID information.

# This is the 2nd commit message:

squash! First assembler app: get CPUID information.

# This is the 3rd commit message:

squash! First assembler app: get CPUID information.

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# Not currently on any branch.
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       new file:   assembler/cpuid.s
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       assembler/cpuid
#       assembler/cpuid.o


But, wait! There's more! What if you don't want to change the original commit's message and just want to fold these additional commits into it? Here is where you'll use the fixup option.

FIXUP

If you don't want to bother with the commit message for the subsequent changes, then you can use the "fixup!" modifier instead of "squash!" as follows:


mcpierce@mcpierce-laptop:assembler (master) $ vi cpuid.s
mcpierce@mcpierce-laptop:assembler (master) $ git commit -a -m "fixup! First assembler app: get CPUID information."
mcpierce@mcpierce-laptop:assembler (master) $ git rebase -i HEAD~3 --autosquash

In these cases, during your rebase, you'll see:


pick dd1893c First assembler app: get CPUID information.
fixup 7032ee8 fixup! First assembler app: get CPUID information.
fixup 8320bbd fixup! First assembler app: get CPUID information.

# Rebase dd1893c..8320bbd onto 04aad6e
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out


Notice that, instead of squash, the lines now say fixup. When you save this rebase git won't prompt you for a commit message. Instead, it will just squash things together and save them using the original message.
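As an aside, you don't even have to type the "fixup! ..." subject by hand: since Git 1.7.4, git commit --fixup=<commit> copies the target commit's subject for you. Here's a self-contained sketch in a throwaway repo (the demo identity and temp directory are just for illustration):

```shell
# Build a throwaway repo so the demo doesn't touch any real work.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# The target commit whose subject we want to reuse.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "First assembler app: get CPUID information."

# --fixup generates the "fixup! ..." subject automatically; no retyping.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty --fixup=HEAD

git log --pretty='%s'
```

The log shows the generated "fixup! First assembler app: get CPUID information." subject on top, ready for a later `git rebase -i --autosquash`.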

Thursday, July 26, 2012

Fisher Price Is Awesome!

For Xmas this past year, my wife and I gave our daughter a Fisher Price Kid Tough See Yourself camera, which she absolutely LOVES to death.

Literally.

By March of this year she had accumulated hundreds of photos and videos. And when it was warm enough to go outside she started taking pictures of bees, flowers, anything out in the front or back yard.

A budding photographer!

Then, in May or June, the speaker started to sound bad. The camera takes video and plays it back, and anything she listened to gradually started sounding like ripping paper. Copied onto the computer, though, everything sounded fine.

So in June I contacted Fisher Price customer support to see if we could send it in to be fixed or replace it with a refurbished camera. But Fisher Price did us one better.

They sent Rachel a brand new camera as a replacement!

It arrived yesterday (in a box that was made for packaging Twilight dolls...how embarrassing!) and you should have seen her face light up when she saw her new camera! She was so thrilled to have it she went right to taking pictures and videos again!

Thank you Fisher Price! You made my little Princess the happiest girl in the world!


Monday, July 16, 2012

Linux Kernel Development...

As a long time user of Linux, I've set the goal for myself of contributing to the Linux kernel by the end of the calendar year (2012). And I've been reading up on the subject, both the kernel internals as well as the details of the POSIX interface.

But now it's time to actually get started on it.

So I've created my own repo for kernel development, and am starting to pore over the bug reports for the kernel. I want to start off with fixing some of the easier bugs to get some experience with the code.

Any suggestions on what areas are good for getting started?


Tuesday, June 12, 2012

Fedora: Restoring Accidentally Deleted Files Using YUM...

While working on a bug, I needed to install some Python libraries from RHEL. Since I didn't want to go through the long process of downloading the source RPM, building it for Fedora, installing it and its dependencies, etc., I thought I'd be slick and just install the library using the setup.py file. Then I could just delete the library when I was done.

Yeah, that ALWAYS works, right?

So while trying to delete the installed libraries, I managed to accidentally type:

rm -rf /usr/lib

before hitting Enter. I stopped it, but not before deleting directories some might consider important. So, after a brief moment considering how much of a pain in the ass it was going to be to reinstall my system, I figured I'd try restoring the deleted files and directories.

Solution

I first got a list of all of the RPMs that have files in the /usr/lib directory. I had to be specific since my laptop uses 64-bit packages, so I had to limit the search to only /usr/lib and ignore /usr/lib64. So I built out the list of packages using the following command line:

for file in $(rpm -qla | grep ^/usr/lib/); do rpm -qf $file; done | sort -u > /root/packages.lst

This part took a VERY long time since it's examining every file in every package, looking for those that have /usr/lib/ in their filename and collecting the package name. But when it's done this file contains the list of all packages I'm interested in re-installing in order to get my laptop back in order. So I then go through and do the re-installation using:

yum reinstall $(cat /root/packages.lst)

When this finished, all of the packages in question were re-installed on my system. Granted I had a few cases where I had to remove old packages that had some conflicts (not sure how I got into such a state, but that's a different blog post). But when all was said and done, the system was back in working order.

Monday, June 11, 2012

QEMU/KVM: Unable To Restore Saved VM

Before upgrading my laptop to Fedora 17, I had saved a VM off, persisting its state and shutting it down. After my upgrade completed I was unable to restore the VM. Instead I consistently got "Error: Connection reset by peer". And there was no way to bring the VM back up again afterward.

Solution

To fix this problem, you need to remove the saved state for the VM using the following virsh command:

virsh managedsave-remove [VM name]

substituting in the VM's name.

Once that was completed, the VM was able to start up again.

Details And Bug Report

Checking the logs I saw the following error:

2012-06-11 15:02:11.920+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /usr/bin/qemu-kvm -S -M pc-0.14 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name Windows7 -uuid dda2225d-0e57-8396-eff5-ad4da9d9febc -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Windows7.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/Windows7.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/lib/libvirt/images/MeetManager.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=22,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:db:50:0d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga qxl -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -incoming fd:20 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
Domain id=1 is tainted: high-privileges
char device redirected to /dev/pts/11
do_spice_init: starting 0.10.1
spice_server_add_interface: SPICE_INTERFACE_QXL
red_worker_main: begin
display_channel_create: create display channel
cursor_channel_create: create cursor channel
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver
ALSA lib pulse.c:243:(pulse_connect) PulseAudio: Unable to connect: Connection refused

file mcoputils.cc: line 499 (static std::string Arts::MCOPUtils::mcopDirectory()): assertion failed: (home != 0)
sdl: SDL_OpenAudio failed
sdl: Reason: No available audio device
sdl: SDL_OpenAudio failed
sdl: Reason: No available audio device
audio: Failed to create voice `dac'
audio: Failed to create voice `adc'
qemu: warning: error while loading state for instance 0x0 of device '0000:00:02.0/qxl'
load of migration failed
2012-06-11 15:02:32.936+0000: shutting down

This led at least one developer to suspect that the SPICE audio portion could not handle the migrate-to-file->upgrade->migrate-from-file scenario. I've filed this Bugzilla to get it fixed.

Ruby: Fix for "timeout.rb:60: [BUG] Segmentation fault"

After upgrading my laptop from Fedora 16 to Fedora 17, I hit a big problem while working in Ruby. Since Fedora changed from Ruby 1.8.7 to Ruby 1.9.3 I've decided to work more with RVM so that I can continue to develop across versions.

Problem

One of the tasks I'm working on requires developing native extensions. So for Ruby 1.8.7 I needed to install the various rake, rake-compiler, rspec and other gems for use under RVM. But when I tried to install them I had this error come up consistently:


mcpierce@mcpierce-laptop:ruby (Different-Runner-model) $ gem install rake
/home/mcpierce/.rvm/rubies/ruby-1.8.7-head/lib/ruby/1.8/timeout.rb:60: [BUG] Segmentation fault
ruby 1.8.7 (2012-06-10 patchlevel 368) [x86_64-linux]

Solution

I needed to reinstall Ruby 1.8.7 under RVM with the following compiler flags:

mcpierce@mcpierce-laptop:ruby (Different-Runner-model) $ CFLAGS="-O2 -fno-tree-dce -fno-optimize-sibling-calls" rvm install 1.8.7
Installing Ruby from source to: /home/mcpierce/.rvm/rubies/ruby-1.8.7-p358, this may take a while depending on your cpu(s)...

ruby-1.8.7-p358 - #fetching 
ruby-1.8.7-p358 - #extracting ruby-1.8.7-p358 to /home/mcpierce/.rvm/src/ruby-1.8.7-p358
ruby-1.8.7-p358 - #extracted to /home/mcpierce/.rvm/src/ruby-1.8.7-p358
Applying patch 'stdout-rouge-fix' (located at /home/mcpierce/.rvm/patches/ruby/1.8.7/stdout-rouge-fix.patch)
Applying patch 'no_sslv2' (located at /home/mcpierce/.rvm/patches/ruby/1.8.7/no_sslv2.diff)
ruby-1.8.7-p358 - #configuring 
ruby-1.8.7-p358 - #compiling 
ruby-1.8.7-p358 - #installing 
Removing old Rubygems files...
Installing rubygems-1.8.24 for ruby-1.8.7-p358 ...
Installation of rubygems completed successfully.
ruby-1.8.7-p358 - adjusting #shebangs for (gem irb erb ri rdoc testrb rake).
ruby-1.8.7-p358 - #importing default gemsets (/home/mcpierce/.rvm/gemsets/)
Install of ruby-1.8.7-p358 - #complete 
Please be aware that you just installed a ruby that requires 2 patches just to be compiled on up to date linux system.
This may have known and unaccounted for security vulnerabilities.
Please consider upgrading to Ruby 1.9.3-125 which will have all of the latest security patches.

Once I did this, gem installs worked as expected.

Wednesday, May 16, 2012

Louis C.K. On Same-Sex Marriage

Granted some of his terms are harsh, but his heart's in the right place. This guy is awesome!

Tuesday, May 8, 2012

Maurice Sendak (1928-2012)


You gave all of our childhoods a lift with your wild imagery and your creative stories. I was happy to share that with my children as well, and know your work will live on forever. You will be missed.

Sunday, May 6, 2012

The Inconceivable Nature Of Nature - Richard Feynman On Light

In the song "We Are All Connected" there's a portion taken from a video interview of Richard Feynman. And being a fan of Feynman's writing I went looking for that interview. Here's the segment that contains the specific quotes:




Below is the video for the song. It features clips from others I admire as well: Carl Sagan, Bill Nye and Neil deGrasse Tyson. If you go back to the actual quotes, it's quite moving.



Friday, May 4, 2012

Science Is Interesting - Neil deGrasse Tyson "Rebukes" Richard Dawkins

Last night I was chatting with Mike Z. and April on Xbox Live when Mike mentioned a certain exchange between Richard Dawkins and Neil deGrasse Tyson. Not a negative exchange, but definitely one with a point to it.



And I agree with Tyson: there needs to be persuasion in education. Simply presenting facts just isn't enough, especially when they can be overwhelming or, worse yet, intimidating to the person hearing them. I'm not criticizing Dawkins: honestly, I've only read his books and not heard or seen him speak. And while I appreciate (and to a degree reflect) his cynicism, I do feel that it's not the smoothest road to educating people.

The way to look at it is like this: ignorance is appealing. Why is it appealing? Because it makes someone feel comfortable. It's comforting to think there's an invisible skydaddy who watches over you. It's comforting to think that after you die there's ANOTHER chance to live, and this time forever.

And it's FRIGHTENING to realize how ignorant we are of so much in reality.

And when someone is frightened they withdraw to the things that provide them comfort, even if it's not the truth. So we can't bring those who are ignorant, willfully or otherwise, into the world of rational truth while triggering their flight response back to their comfort in ignorance.  We have to show them that reality, truth and science are better than ignorance. We have to give them the tools to understand and learn, and then show them the way.

Monday, April 23, 2012

Creating A Source Tarball From A Github Commit

I won't claim credit for this: I got the initial script from Eric Smith on the Fedora development mailing list. I then enhanced it a bit to make it more useful for the other Github projects I work on. But I definitely wanted to share it with everybody since it's pretty sweet and simple. To get a source tarball based on a specific git commit from Github, you can use the following script:

#!/bin/sh

usage() {
  printf "Usage: ${ME} COMMITHASH [username] [project]\n"
  printf "\n"
  printf "\t[username] -- Overrides the default in ~/.tarballrc\n"
  printf "\t[project]  -- Override the default in ~/.tarballrc\n"
  printf "\n"
  printf "EXAMPLE ~/.tarballrc file:\n\n"
  printf "TBUSERNAME=[your username]\n"
  printf "TBPROJECT=[my project's name]\n"
  printf "\n"
}

ME=$(basename ${0})
die() { printf "$@\n"; exit 1; }

if [[ "${1}" == "-h" ]]; then
  usage
  exit 0
fi

[ -s ~/.tarballrc ] && source ~/.tarballrc

username=${2:-$TBUSERNAME}
project=${3:-$TBPROJECT}
commit=${1}


if [[ -z "${username}" ]]; then die "You must provide a username."; fi
if [[ -z "${project}" ]];  then die "You must name a project."; fi
if [[ -z "${commit}" ]];   then die "You must specify a commit."; fi


REPO="git://github.com/${username}/${project}"


git clone ${REPO}


( cd ${project} && \
  git archive --format=tar --prefix=${project}-${commit}/ ${commit} \
) | xz - >${project}-${commit}.tar.xz


This script will take as input 1) the commit hash for the checkout, and optionally 2) the username for the Github repo and 3) the Github repo name itself. So, for example, to clone my Newt Syrup repo, you could use:

tarball 6f0056 mcpierce newt-syrup

and this would give you the code for all commits up to 30 July 2011.
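As an aside, if you only need the tarball and not a local clone, GitHub can serve an archive for a commit directly via its /archive/<commit>.tar.gz URL scheme, which skips the full clone. The helper below only builds the URL string (the curl line shows how you'd fetch it):

```shell
# Builds the GitHub archive URL for a given user, project and commit (no network I/O).
github_tarball_url() {
  printf 'https://github.com/%s/%s/archive/%s.tar.gz' "$1" "$2" "$3"
}

github_tarball_url mcpierce newt-syrup 6f0056
# Fetch it with something like:
#   curl -L -o newt-syrup-6f0056.tar.gz "$(github_tarball_url mcpierce newt-syrup 6f0056)"
```

Note that GitHub's archives are gzip tarballs rather than the xz ones the script above produces.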



The Valve Employee Handbook

I heard about a leaked Valve employee handbook today and, as a professional open source developer who used to write games, I was intrigued, to say the least. And what I have to say is:

Wow, that sounds very cool.

No management structure. Self-directed projects. A true meritocracy with anonymous peer reviews. All sorts of freedoms that sound very good on the surface.

And one thing that really rings true is the part about how working long hours is a bad thing. It's been an issue for me for years now, how some people treat working a 60+ hour week as the norm. I won't go into it here and will instead post on that subject later. Suffice it to say that I love to hear a company say that, when long hours happen, it's a sign of a problem in the company and should not be viewed as how things ought to be.

Go give it a read. It's a nice glimpse (assuming it's for real) into the world of the gaming company that has brought us such things as the Portal and Left4Dead franchises, Half-Life and Team Fortress 2. And if you're interested in working for them, then check out their jobs page on their website.

Wednesday, April 18, 2012

Telling Ruby Where To Find C++ Headers In Native Extensions

On my current project I'm responsible for maintaining the Ruby language bindings for some C++ code. And part of what I've worked on was to write some Ruby extensions in C to overcome some blocking I/O issues. So initially I wrote a simple extconf.rb file that looked like this:

require 'mkmf'
create_makefile('nonblockio')

and that was it.

Now, a few months later, we need to have some enhancements to this. And we also need to support two versions of Ruby (1.8 and 1.9). So I decided to enhance this simple extconf.rb file to ensure that certain libraries and headers were present depending on the version of Ruby being used.

The problem is that some of the header files for our project themselves include headers from the C++ standard library. And, by default, Ruby will use gcc on Linux when testing for headers. So when the following snippet executed in extconf.rb:

fail("Missing header file: exceptions.h") unless have_header("qpid/messaging/exceptions.h")

it would fail because exceptions.h includes <string>, and gcc doesn't know how to handle a C++ header file.

Telling Ruby To Use C++

To overcome this problem, you need to tell Ruby to use a C++ compiler rather than a C compiler when necessary. To accomplish this, I added the following to extconf.rb:

Config::CONFIG['CPP'] = "g++ -E"

Now when Ruby creates the Makefile and first checks for the headers it uses g++ instead of gcc. Also, you MUST include the "-E" commandline argument: this argument tells the compiler to stop after the preprocessor stage. That way it ONLY checks that the included header file and its header dependencies are all present.

Tuesday, April 10, 2012

Fedora Packages, Ruby Gems And Patching Sources

I'm the Fedora package maintainer for several Ruby language gems. One new package I'm preparing to release is the Ruby language bindings for the Qpid messaging framework.

One of the challenges I had to overcome was to apply some patches to the release that overcome some blocking I/O issues in the underlying codebase. We decided that the first release, for Fedora 16, would be based on our 0.16 release of Qpid, which doesn't contain the fixes I've written to provide non-blocking I/O functionality. (long story short, Fedora 16 provides Ruby 1.8, Fedora 17 introduces Ruby 1.9, and Ruby 1.9 has a better threading model). I've proposed an RPM based on our 0.16 code and needed to apply a set of patches on top of that for code that's not going to be a part of the upstream codebase for 0.16.

So the challenge was: how do I apply these patches on top of a gem when creating the RPM? I'd need to unpack the gem, apply the patches and then rebuild the gem before continuing. Not an easy task, to say the least, but one that, with a little ingenuity, was overcome. And I owe thanks to my buddy Ashcrow for help with a few issues.

Part 1: Unpacking The Gem

We have a set of seven patches that need to be applied to the base gem to provide our non-blocking I/O functionality. They are defined higher up in the spec:

Patch1: 0001-Ruby-extensions-to-use-the-non-blocking-I-O-commands.patch
Patch2: 0002-Updated-the-Rakefile-to-build-the-nonblockio-code.patch
Patch3: 0003-Modified-the-Qpid-Ruby-code-to-load-the-non-blocking.patch
Patch4: 0004-Modified-the-testing-environment-to-accomodate-non-b.patch
Patch5: 0005-Updated-the-spout-and-drain-ruby-examples.patch
Patch6: 0006-Cleaned-up-the-Ruby-bindings-documentation.patch
Patch7: 0007-More-cleanups-on-the-Ruby-documentation.patch

The first thing you need to do is open up the gem in order to apply the patches to the source code. In the spec file in the %setup section we have:

%setup -q -c -T

pushd ..
gem unpack %{SOURCE0}

pushd %{gemname}-%{version}
gem spec %{SOURCE0} -l --ruby > %{gemname}.gemspec

The spec exits the buildroot (holding an anchor so it can return) and unpacks the gem, specified by %{SOURCE0}. It then enters the subdirectory that gets created (again, holding an anchor) and generates a gemspec file based on the contents of the directory. This is necessary in order to repackage the gem once we're done applying the patches.

Part 2: Applying The Patches

This is the easiest part of the whole process, but it was also the spot where Ashcrow needed to set my head straight.

In our source tree for Qpid, the Ruby language code exists at qpid/cpp/bindings/qpid/ruby. But when you're inside the unpacked gem, you're already at the bottom level of that tree. All of the patches that get applied, though, assume you're starting above the qpid directory.

To apply the patches, you have to tell the patch macro to ignore the first n levels of nesting in the patch files:

%patch1  -p6
%patch2  -p6
%patch3  -p6
%patch4  -p6
%patch5  -p6
%patch6  -p6
%patch7  -p6

Here we tell it to ignore the first 6 levels of directories when applying the patches, since they're generated with the prefix of "{a,b}/"; i.e., a/qpid/cpp/bindings/qpid/ruby/lib/qpid/encoding.rb is one example of a referenced file. Stripping six components (a/qpid/cpp/bindings/qpid/ruby/) leaves lib/qpid/encoding.rb, which is relative to the unpacked gem's root.
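A quick way to see what -p6 does is to strip the leading path components from a file path in one of the patch headers. A small sketch, using the example path above:

```ruby
# Mimic `patch -p6`: drop the first six path components from a file
# path as it appears in a patch header.
path = "a/qpid/cpp/bindings/qpid/ruby/lib/qpid/encoding.rb"
stripped = path.split("/").drop(6).join("/")
puts stripped  # => lib/qpid/encoding.rb
```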

Part 3: Repackaging The Gem

One of the changes in this set of patches deleted two source files, the aforementioned lib/qpid/encoding.rb and spec/qpid/encoding_spec.rb. When the repackaging step occurred, the gem command choked on the missing files. So we had to fix this by removing those entries from the list of files to be packed into the gem.


# eliminate the encoding-related entries in the gemspec
sed 's/\,\ \"spec\/qpid\/encoding_spec.rb\"//;s/\,\ \"lib\/qpid\/encoding.rb\"//' %{gemname}.gemspec > %{gemname}.gemspec-1
cp -f %{gemname}.gemspec-1 %{gemname}.gemspec


gem build %{gemname}.gemspec
cp -f %{gemname}-%{version}.gem %{SOURCE0}
popd
popd

The first step uses sed to strip the references to the two deleted files, writing the result to a temporary file, and then overwrites the original gemspec with the newly updated one.
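The sed expression is dense, so here is the same edit sketched in Ruby, using a hypothetical one-line gemspec fragment (the real file is generated by `gem spec ... --ruby` and is much longer):

```ruby
# Hypothetical gemspec fragment for illustration only.
gemspec = 's.files = ["lib/qpid.rb", "lib/qpid/encoding.rb", "spec/qpid/encoding_spec.rb"]'

# Delete the two removed files from the file list, just as the sed
# expression in the spec does.
cleaned = gemspec.sub(', "spec/qpid/encoding_spec.rb"', '')
                 .sub(', "lib/qpid/encoding.rb"', '')

puts cleaned  # => s.files = ["lib/qpid.rb"]
```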

The next step rebuilds the gem and then copies it back over the original sources.

BE VERY CAREFUL HERE!

If you're building the RPM locally this will REPLACE your source gem with the modified version. So you'll want to replace the source gem after each local rpmbuild session or else you'll hit complaints from patch about trying to apply changes that apparently have already been applied.

To avoid overwriting the original SOURCE0, use the following line to install the gem:


gem install --local --install-dir .%{gemdir} \
            -V \
            --force ../%{gemname}-%{version}/%{gemname}-%{version}.gem


At this point the gem installation process proceeds as normal. The patched sources are now ready to be installed when the RPM is installed.

Monday, March 19, 2012

Lightning And Sneakers

Right now we have a thunderstorm passing over our part of North Carolina. There was the darkening of the skies this morning, and then the calm and quiet as the air pressure dropped and the birds all took to the trees and rooftops.

Then the rain started. Followed by sporadic lightning flashes.

I'm sitting at my home desk working and chatting with coworkers. One of my daughter's sneakers is sitting on the floor next to me. It's a pair of Sketchers that lights up when she walks (or runs, or hops, or dances, or walks around like a cat). It was just on the edge of my peripheral vision.

A flash of lightning outside the window, not a bolt, just a flash. And the shoe immediately started flashing. There was no shaking of the floor. No motion on my part at all. I assume that the magnetic field created by the current in motion was just strong enough to trigger the lights in her shoes.

Science. It's an awesome skill.

Tuesday, February 21, 2012

Using notmuch to search emails from within Mutt

I am not a fan of GUI apps for all occasions. I generally prefer to work in a text environment and use the keyboard rather than always mousing around and clicking to get things done. So when my buddy, Will, showed me how to use notmuch with Mutt to search through my email I was ecstatic. Prior to this I would use command line tools to find emails and then navigate to folders to read them. With the following setup I can now do my searching from within Mutt and quickly get to what I want.

The following instructions are how to set things up on Fedora 16. For other platforms please be sure to adjust what needs to be downloaded and/or installed from package repositories to suit your needs.

What You Need To Install

I'm going to assume that you already have Mutt installed. In addition, you'll need to install the following packages and all of their dependencies:

  • perl-Mail-Box
  • perl-Email-Sender
  • perl-MailTools

The Script That Does The Work

You can download the mutt-notmuch script here. On my system I stored it at ~/Documents/mutt-notmuch.pl and refer to it as such in my .muttrc below.

Mutt Configuration

The big hook is a pair of macros in your .muttrc file. Here is what I added to mine:


### BEGIN NOTMUCH-MUTT SETTINGS
macro index <F8> \
      "<enter-command>unset wait_key<enter><shell-escape>~/Documents/mutt-notmuch.pl --prompt search<enter><change-folder-readonly>~/.cache/mutt_results<enter>" \
      "search mail (using notmuch)"
macro index <F9> \
      "<enter-command>unset wait_key<enter><pipe-message>~/Documents/mutt-notmuch.pl thread<enter><change-folder-readonly>~/.cache/mutt_results<enter><enter-command>set wait_key<enter>" \
      "search and reconstruct owning thread (using notmuch)"

With this code you can use F8 to search for messages and F9 to reconstruct the thread for one of the search result messages.

That's it! Now with this setup you can stay in a single terminal to search.

Monday, January 9, 2012

We're Moving!

What our new office may look like.
Not the blog, but my employer.

It's been a topic for office discussions over the past few years. And now we have a destination and a time frame for the move. Though I don't know how this will affect me personally.

Right now I work from home two days per week and only go into the office on days when I have class. At that, my classes at NCSU this semester are right across the street from our office, which is very convenient. And, going forward, most of my classes will be on the Centennial campus in that same cluster of buildings.

But once we move to the old Progress Energy building, I won't have the convenience of being on campus. And I don't think I'll be close enough to the bus lines to grab one to class. For the past few semesters I was able to catch one Wolf Line bus that came right by our offices to classes on the Main campus. However, I don't think any of them go downtown.

But I'm at least glad we know the details for our move.

Sunday, January 1, 2012