Friday, December 9, 2011

Batman: Arkham City Lockdown

Yesterday I saw an article on Super Hero Hype about the new iOS game, Batman: Arkham City Lockdown. I don't normally buy games for my phone (free is for me) but I decided, what the hell? and bought it for my iPhone 3GS.

Now I haven't had a chance to do much more than the first tutorial, but I'm pretty stoked to be able to play as my favorite character on my phone. But I first had to get past the startup.

When I loaded the game it seemed to be stuck on the title screen for a very long time. Then, when it moved past that to the New Game screen, it immediately went back to the title screen for a while (I didn't have a chance to hit "New Game") then came back.

I had to eventually reboot my phone to get past the title screen and actually into the game. Once into the game I played the first tutorial, pitting the Batman against a single thug, to learn how to fight. You're taught three moves:

1. single punch
2. double punch
3. defensive block

The single punch is simple enough: you tap the target and then swipe your finger to the right to throw the punch.

To do a double punch you make the single punch move but then drag your finger to the left to throw a combo. A few back and forths give you a longer combo.

For a defensive block you drag your finger down and the Batman blocks the attack. Additionally, doing that when not defending causes the Batman to use his cape as a distraction, as in the game "Batman: Arkham City".

Look for a more detailed review on A Little Dead Podcast in the coming weeks.

Tuesday, November 29, 2011

Amazon Releases The Kindle's Source Code.

In a very cool move, Amazon has released the source code for their Kindle devices. I've already downloaded the source code for mine and look forward to poking through it starting tonight.

Now here's to hoping they'll also accept patches to fix problems or add new features in a true open source model!

Tuesday, November 22, 2011

Downloading Library Books To The Kindle

Last year when my wife gave me my Kindle for Xmas, one of the things I was disappointed by was that my local library didn't support the Kindle for e-lending. I'm a little late with the news, but that's apparently changed! As of September, 2011, OverDrive now supports the Kindle as a platform for loaning books.

Of course, this doesn't mean that all books are automatically available for the Kindle. But it does mean that I can now borrow titles that are available through my local library and have them automatically delivered to my Kindle! That's a huge win for the platform, which I've really grown to love over the past year.

Now to get more than just certain romance and fiction titles to be made available. I would love to be able to borrow Carl Sagan's books or pretty much anything from the sciences and mathematics sections at the touch of a button...

Thursday, November 10, 2011

Batman: Arkham City Saved Games Disappeared (Lost Saves)

After my problem with getting the game, it now appears I'm going to have trouble playing it as well. There seems to be a serious bug in "Batman: Arkham City" related to saving game data and retaining it between game sessions.

The first time it hit me I was annoyed. I had gotten to a certain point in the game (big boss fight with Solomon Grundy) and had saved my game. When I came back later and tried to play some more I was told that my downloaded content was corrupted and I needed to delete and reinstall. So I went to the system settings for my 360 and deleted the downloaded content. When I started the game back up, it redownloaded the content and applied it fine.

But when I got into the game, there were no saved games. All of the slots were empty. So I assumed I mistakenly deleted my saved game data. But I didn't: I deleted DLC only.

So I started the game over again. I love the Batman and kind of dig this game (though not as much as the original for some reason). I got past the point where I was the first time around, moved on to the next mission and finished it. Then I saved off and took a break.

Now, two days later, I go to play. The first thing it says is there's new content to download. So I download it and it applies quickly.

But when I got into the game, there were AGAIN no saved games. All of the slots were empty.

So I googled and found that I'm not the only one having this problem.

Monday, November 7, 2011

TypeError: can't convert Module into Integer (error in Ruby native extension)

For about 1/2 an hour today I was stuck on a single error:

TypeError: can't convert Module into Integer

The code in question was:

static VALUE qpid_receive(VALUE arg_timeout)
{
  int timeout =  FIX2INT(arg_timeout);

  // do some work with the timeout value

  return Qnil;
}

If I commented out the line that called FIX2INT() then the error went away. But for the life of me I couldn't figure out the problem.

Then I realized my error.

After googling the error message and finding no solutions, I realized I was missing an argument. Every native extension method must have, as its first argument, a VALUE that is the receiver of the method call (self).

So after changing the method signature to be:

static VALUE qpid_receive(VALUE self, VALUE arg_timeout) { ... }

the error went away.
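For completeness, here's roughly what the fixed method and its registration look like. This is a sketch: the module name, init function and method name are assumptions, not the actual Qpid binding code.

#include <ruby.h>

static VALUE qpid_receive(VALUE self, VALUE arg_timeout)
{
  int timeout = FIX2INT(arg_timeout);

  // do some work with the timeout value

  return Qnil;
}

void Init_qpid(void)
{
  // Ruby supplies the first VALUE (self); only the remaining arguments
  // count toward the arity passed to rb_define_module_function.
  VALUE mQpid = rb_define_module("Qpid");
  rb_define_module_function(mQpid, "receive", qpid_receive, 1);
}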

Tuesday, November 1, 2011

Using cryptsetup to encrypt a USB drive.

This morning I received an email from the Fedora Project reminding me that this month we're expected to do a password and SSH key refresh. So that got me to thinking about how I securely store my keys separately from my laptop. I do keep a copy of the keys in a password-encrypted zip file stored on my home server and backed up onto a separate drive, but I want to do something in addition to that to keep things more secure.

So I decided to break off a section of my 4GB thumb drive, make it an encrypted drive, and store the keys there as well. Following are the steps I used to do just that.
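The encryption setup itself boiled down to something like the following (a sketch from memory; the device name /dev/sdb2 and the mapper name secure are placeholders, not my actual ones):

# carve out a partition for the encrypted section (e.g. with fdisk), then:
cryptsetup luksFormat /dev/sdb2
cryptsetup luksOpen /dev/sdb2 secure
mkfs.ext4 /dev/mapper/secure
cryptsetup luksClose secure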

A Google search turned up this page, which got me through most of what I needed to do. But one step that didn't work was naming the partition itself.

On the page the author recommends doing:

/sbin/e2label /dev/mapper/cryptmap "Brad's Files"

However, for me, this failed consistently (on my system the mapper is /dev/mapper/cryptmcp). What I had to do was refer to the partition by its UUID instead:

/sbin/e2label /dev/mapper/udisks-luks-uuid-ec00b89c-8a65-4fa5-8a9d-de0b7ecc5efa-uid500 "McPierceSecure"

After disconnecting and reconnecting the drive and entering my password, the device is now mounted with the correct label.

Accessing The Drive As A User

When that was all done, I found I wasn't able to write to either partition as my regular user. To fix that, just run the command:

sudo chown mcpierce:mcpierce /media/McPierceSecure

Verify by unmounting, removing then re-inserting the drive.

Tuesday, October 18, 2011

Best Buy + XMEX = Customer Service Failure

Well in advance of its release I ordered Batman: Arkham City from Best Buy. I did this mainly because there are extras (the Tim Drake-Robin playable character and maps) that were only available if you pre-ordered through them, with the guarantee that it would arrive on the release date of 18 October.

And I was even more excited when I got an email on 13 October telling me that my package had shipped. The tracking link on the order went to a site that said it had no record of my package, but I attributed that to the package having just shipped out.

But now it's the 18th. The day the package is supposed to ARRIVE. The link STILL goes to a site (Streamlite, which used to be XMEX) that says it has no record of my package.

So I called Best Buy customer service to find out what's going on with the order. And I was TALKED OVER by a CS rep who said that the tracking information isn't updated until the package is delivered.

Then what's the purpose of the tracking link if I can't find out the package's location until it's in my hand? She tried to say that the number is from the USPS and that they don't scan packages until they're delivered. This, of course, is completely untrue. I've had packages shipped via USPS before and they've given me status from the moment the package was picked up.

And, either way, if that's the case then shouldn't Streamlite/XMEX have updated the status on their site (since that's where the link goes) to say "Delivered to US Postal Service" or something similar? Why all the shenanigans?

Then she explained that my package may not show up until next week on the 25th because I chose the wrong shipping method; i.e., ground shipping does not guarantee delivery on the release date.

In that case you might want to, I don't know, put that on your website. When I ordered I chose to have the package delivered on the release date and was never warned that shipping via ground could arrive later.

All in all a horrible customer service experience. I won't be pre-ordering items again through Best Buy based on this event. The extras just aren't worth this sort of poor customer service.

Monday, October 3, 2011

The Return Of Suspend/Resume Capability!

When I upgraded my laptop1 to Fedora 15 and kernel 2.6.38.6-26, the first thing I noticed was that the ability to suspend/resume and hibernate my laptop was gone. Whenever I would try to do these things my laptop would either lock up completely or else would come back up without my previous session.

So it was a pain in the ass to have to shut down my laptop in order to go to work, leave at night or do anything requiring travel. Worse yet, I couldn't dock or undock my laptop without it locking up solid.

That is, until the latest kernel update:

mcpierce@mcpierce-laptop:~ $ uname -r
2.6.40.4-5.fc15.x86_64

With this my laptop is now back in business. I was able to suspend my laptop this morning when I was ready to come to work, and then dock it at my desk and was immediately online.

Nice!

1 - Thinkpad W510+ with nVidia Quadro FX 880M, 4GB RAM and Intel i7 Q 820 CPU.

Sunday, October 2, 2011

You must change your password/You have reached password changes for a 24 hour period

In episode 120 of my podcast I announced that DC Universe Online was going to a free-to-play model in October. So with it being the 2nd of October I thought I'd go ahead and download the client and give it a try.

The first thing I found was that the free-to-play model isn't active yet. WTF?

The other, and more annoying, thing is that the Sony Online Entertainment website required that I change my password. So, of course, I had no choice but to do so. Then the system kicked me back to a login prompt again after the password change. The change obviously worked since I could use it to log in.

But it then said I needed to change my password AGAIN in order to go to the DC Universe Online site to download the client. And when I tried to change my password, it said I had exceeded the number of times you're allowed to change your password in a 24 hour period.

Really? It's not like I WANT to change it.

So now I can authenticate, but can't get past the "you must change your password" page. EVEN THOUGH I DID THAT ALREADY.

/me sighs and shakes head slowly...

And I'm still a little annoyed that it's October and the game's still not free.

Wednesday, September 28, 2011

Ruby error: uninitialized constant Qpid::Messaging::Tracker

My current task has me working on a non-blocking send queue for the Ruby bindings of our messaging toolkit. Long story short, I've developed a long-lived thread model that services a queue that contains message references that are popped off as the sender's outgoing capacity opens up enough to allow messages to go out without blocking the global interpreter.

The Problem

But in the process of doing that, I hit a very strange error:

/usr/lib/ruby/gems/1.8/gems/rake-0.9.2/lib/rake/ext/module.rb:36:in `const_missing': uninitialized constant Qpid::Messaging::Tracker (NameError)

The two classes in question are named SendQueue and Tracker. SendQueue uses Tracker to monitor the capacity so it will know when a slot opens up to send a message. In the constructor for SendQueue there's the line:

def initialize
  #Tracker is a singleton, so we grab the one instance
  @@tracker = Qpid::Messaging::Tracker.instance
end

The line fetching the instance resulted in the above mentioned error. If I explicitly required the tracker module:

require 'qpid/tracker'

the problem would go away. But that shouldn't be necessary, and other code in the library that uses sibling classes doesn't need to require those siblings. So I was stumped as to why this problem was happening.

The Solution

The problem was with the lib/qpid.rb file. When you use this library you would just need to:
require 'qpid'

which then evaluates the following set of requires:
require 'qpid/errors'
require 'qpid/duration'
require 'qpid/address'
require 'qpid/encoding'
require 'qpid/message'
require 'qpid/send_queue'
require 'qpid/sender'
require 'qpid/receiver'
require 'qpid/session'
require 'qpid/connection'
require 'qpid/tracker'

The problem is that qpid/tracker wasn't evaluated until AFTER qpid/send_queue, which meant that the SendQueue class was evaluated before Ruby knew that Tracker existed.
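To see why the order matters, here's a tiny, self-contained illustration of the same failure mode (the names are hypothetical, not the Qpid sources):

# load_order.rb
module Demo
  class Tracker
    def self.instance
      @instance ||= new
    end
  end
end

module Demo
  class SendQueue
    # This reference is resolved while the class body is being evaluated,
    # so Demo::Tracker must already be defined at this point. Move this
    # class definition above Tracker's and Ruby raises
    # NameError: uninitialized constant Demo::Tracker.
    TRACKER = Demo::Tracker.instance
  end
end

puts Demo::SendQueue::TRACKER.inspect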

So to solve it, I had to break away from my OCD need to keep things sorted alphabetically and instead move qpid/tracker above qpid/send_queue:

require 'qpid/errors'
require 'qpid/duration'
require 'qpid/address'
require 'qpid/encoding'
require 'qpid/message'
require 'qpid/tracker'
require 'qpid/send_queue'
require 'qpid/sender'
require 'qpid/receiver'
require 'qpid/session'
require 'qpid/connection'


As you can see, simply moving the require line up in lib/qpid.rb solved the whole problem.

Thursday, September 1, 2011

Adding Git branch to your command prompt...

It's gotten a bit annoying having to type:

git branch

just to see where I am in my current git repo. So I added the current branch name to my prompt. And if I'm not in a git repo, nothing is displayed.

Here's the code from my ~/.bashrc file:

# Setup for my prompt

COLOR1="\[\033[1;36m\]"
COLOR2="\[\033[0;32m\]"
COLOR3="\[\033[1;33m\]"
COLOR4="\[\033[1;37m\]"

PS1="$COLOR3\u@\h$COLOR2:$COLOR1\W $COLOR4\$(ruby -e \"print (%x{git branch 2> /dev/null}.grep(/^\*/).first || '').gsub(/^\* (.+)$/, '(\1) ')\") $COLOR1\\$ $COLOR4"

export PS1

The key here is the segment passed into ruby, run in a subprocess. It runs git branch (with stderr sent to /dev/null) to find the name of the current branch. If it gets a nil result then it knows we're not in a git repo and it returns an empty string. Otherwise, the branch name is placed within a pair of parentheses.

This gives us a nice prompt such as:

mcpierce@mcpierce-laptop:cpp (master)  $

which is very nice. Now I know when I'm on the master branch, my upstream branch or some other branch. No more accidentally forcing a non-fast-forward push into the remote repo for me...

...hopefully.
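If you'd rather not spawn a ruby process for every prompt, roughly the same thing can be done in plain bash. This is just a sketch, not what I'm actually running:

# bash-only version of the branch segment
git_branch() {
  # prints "(branchname) " or nothing if we're not in a git repo
  git branch 2> /dev/null | sed -n 's/^\* \(.*\)/(\1) /p'
}

PS1="$COLOR3\u@\h$COLOR2:$COLOR1\W $COLOR4\$(git_branch)$COLOR1\\$ $COLOR4"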

Wednesday, July 27, 2011

You have to be careful with eggs...

...because they are, without a doubt, the most annoying food on the planet.

I'm not sure when it happened. Maybe 8 or 9 years ago. But when did eggs stop being easy to peel after hard- or soft-cooking?

Seriously. It's the most annoying thing in the world. You cook your egg, you cool it. You crack the top and bottom and then roll the egg to loosen the shell. Then, as you start peeling, the white meat of the delicious interior starts pulling off in chunks and strips, leaving you with quite a percentage less egg than you were expecting.

Take today, for example. I made a simple breakfast of two soft-cooked eggs and toast with butter. I placed the eggs out for 20 minutes to warm up a little. I then put them in a pot and covered them with cold water by an inch. Placing the pot on a burner at high, I brought the whole thing to a boil, then dropped it to a simmer for 3 minutes.

After the time was up, I poured off the hot water and doused the eggs in cold water. When they were ready to handle (about 30 seconds) I took them out and started peeling.

And lost about 20% of the white as it came off with the shell.

I tried to peel just the shell, and also by gripping the membrane between the shell and the white. But, no matter how I tried, the membrane remained stuck to the albumen and pulled bits and chunks out that I would have rather eaten.

And this travesty has been going on for years now. I love hard-cooked eggs as a snack or for breakfast. But I'm frustrated as hell with the loss of good egg meat. Seriously, about 50% of the eggs I cook turn out this way. I've tried all kinds of techniques: eggs in cold water brought to a boil then left to return to room temperature, or brought to a boil and then doused in cold water and ice, or left to boil for up to five minutes. I've used fresh (less than a week old) eggs as well as older eggs (where the membrane pulls away from the albumen). Nothing seems to affect my egg experience.

What am I doing wrong? Any suggestions?

Saturday, July 23, 2011

Yeah, I wasn't really interested in playing the game...

For my birthday last week, one of the presents my wife gave me was a copy of Left4Dead2 for the PC. I had been asking about it for a while, ever since a movie maker told me how he had used the game engine for the effects on his film and planned to release the maps for gamers.

After a week of work and swim team-related distractions, I finally sat down today to install the game and begin enjoying some zombie-killing goodness.

Boy, was I surprised.

I installed the game from disk. Well, more exactly, I inserted the disk and it installed the Steam client. It then installed Left4Dead2 from disk. But it will not let me play the game until it downloads some updates.

About 2 hours worth of updates.

What the HELL? I can't even play the game while it downloads? I can't do ANYTHING with the game until it finishes downloading.

In TWO HOURS.

I have a high speed broadband internet connection. But the game is going to block me from playing while it pulls down updates I didn't even ask for. And pausing the update process so I could, I don't know, PLAY THE VERSION I INSTALLED, is not allowed. I can't play the existing version until it finishes updating.

IN TWO HOURS.

Valve, seriously, this is ridiculous. Why should I HAVE to download your updates to play the game? Why can't I play the version I have installed? Sure, I can see requiring updates in order to play online. That way everybody's playing the same version and there's no chance of version mismatches causing problems in game play.

But if I want to play locally, why should I have to download an update? And, at that, how about ASKING ME FIRST? Or giving me some warning that you're going to tie up my network by pulling down GIGABYTES of updates? As the person who owns the hardware and the network pipe, don't you think I deserve at least SOME input on this process?

Really, you've pissed me off with such a poorly thought out process...

Friday, July 15, 2011

Altercation

–noun; from alternat(iv)e + vacation

1. the act of visiting someone whom you normally avoid due to their excessive drama:  
 Last year we went to Disney World, but this year I enjoyed an altercation with my mother where she complained the whole time about how I don't visit her enough.
2. the time spent with someone who causes you grief, anxiety or excessive agitation when you could have been spending that time relaxing, drinking or something more pleasurable, like having your gums scraped:
The altercation with my sister went as expected after I suggested she behaved like white trash and she took it the wrong way.
Origin:
2011

Tuesday, July 5, 2011

What is justice (or, can you just "know" someone is guilty)?

Yeah, I'll admit it. I was distracted by the Casey Anthony trial to some degree. Today the verdict came in, and she was found not guilty on the counts of murder and child abuse. And, of course, there was a huge response from people about how there was no "justice for Caylee".

But, let me ask this: is it "justice" to convict someone when there's no direct evidence linking them to the crime?

Yeah, I know, a lot of people "just know" that Casey did it. And, honestly, I'm not disagreeing that she's the most likely person to have caused the little girl's death, whether intentionally or not.

But you're not going to get justice by taking such a person and declaring them guilty in the absence of any direct evidence.

I'm not going to go into the details about searches for chloroform or smells in trunks or any of that. It's not relevant to my point about justice. If there is no direct evidence linking the accused to the death then you just can't find them guilty. No matter how much you "just know" they're guilty, that's not enough to convict someone.

So whether she did it or not, she's "not guilty" based on the evidence presented.

And a court that works in such a manner ensures justice for everyone.

Wednesday, June 29, 2011

Virtual Machine Manager, Windows VMs and "Unable to create cgroup"

For my project at work, I need to have a Windows machine or two laying around for testing. The cheapest, and easiest, for me is to have virtual machines on my laptop: one for Windows 2008 Server and one for Windows 7.

With the upgrade to Fedora 15 some issues came up with that environment. I had deleted my old VMs and wanted to create two new template VMs (ones I can keep and clone for when I need an actual working VM). So I downloaded the ISOs from my MSDN account and started the process of creating the VMs.

Only thing is, I was stopped in my tracks. After setting up the VM (32-bit, 1 CPU, 1024M RAM, 50G storage) I got the error:


Unable to create cgroup for Windows2k8Server: No such file or directory

and the whole VM creation stops right there. So I filed a Bugzilla against Virtual Machine Manager to get it fixed. In talking with the lead developer I found that the problem is not with Virtual Machine Manager but with systemd itself.

If you're hitting this problem, the workaround (until it can be fixed in systemd; see the Bugzilla) is to stop and then start the libvirtd process and then create your VM. After I did that it worked as expected.
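On Fedora 15, with systemd, that boils down to something like this (assuming the stock libvirtd unit name):

systemctl stop libvirtd.service
systemctl start libvirtd.service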

Monday, June 27, 2011

Forcing have_header to properly handle C++ header files...

The Problem

In working on the Qpid Ruby bindings, I've decided to stay on as proper a path as possible with the environment. Part of that is having the gem's installation verify that the header files necessary to build the native libraries are present.

But that immediately became a problem, since the underlying Qpid code is written in C++ while Ruby's native extensions default to using C. So when, in extconf.rb, the code attempts to verify certain Qpid headers that themselves depend on C++ headers, the verification "fails" since the C compiler can't find those headers.

For example, in qpid/messaging/Address.h there's an include for the string header in C++:

#include <string>

In mkmf, Ruby attempts to validate that qpid/messaging/Address.h is present by creating a temporary C program that contains:

/* begin */
#include <qpid/messaging/Address.h>
/* end */

Inside of Address.h there is:

#include <string>

and while GCC is compiling the temporary source the preprocessor chokes on the above include, so Ruby decides that Address.h doesn't exist.

The Solution

To force Ruby to use your C++ compiler to validate headers as well, simply use:

$CFLAGS = "-x c++"

or else add:

with_cflags("-x c++")

depending on which way you prefer to define the compiler flags. Once you do that, C++ will be used to verify headers and libraries.

And if you're using C++ for your Ruby extensions then that's what you'd want to do anyway.
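Putting it together, a minimal extconf.rb might look something like this. It's a sketch: the header name comes from the example above, but the extension name and the abort message are assumptions of mine:

# extconf.rb
require 'mkmf'

# force mkmf's test programs to be compiled as C++
$CFLAGS = "-x c++"

abort "qpid/messaging/Address.h not found" unless have_header('qpid/messaging/Address.h')

create_makefile('qpid_messaging')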

Monday, June 20, 2011

Disappointed at the theater this weekend...

On Sunday, for Father's Day, Christene and the kids took me to breakfast at IHOP. Afterward they gave me my card, which had a photo of grill utensils, and we drove to Lowes Hardware so I could pick out a grill.


(Yes, I've lived in North Carolina for 13 years now without ever owning a grill)

But as we were waiting for the guy at Lowes to first get us the grill and then help me to load it into the van (he never did come back and I had to get someone else to help me) Christene kept saying we were somewhat pressed for time.

The reason was the third present for Father's Day: tickets to see "Green Lantern" in 3D! She dropped Caleb, Ben and me off at the theater.

So the boys and I got drinks (no need for food since we'd just had lunch) and went in to wait for the film.

And right away I should have realized something was wrong when, during the wait, there was no video shown on the screen. Usually there's the standard set of commercial images the theater cycles through, their advertisers, while a pseudo-radio station plays other advertisements. But we only had the audio.

At 1:05p, five minutes before the film was to start, I called the theater's number and asked why there was no video present. By 1:10p the theater manager came in to say that the bulb had burned out on the projector and they were going to replace it and that should only take a few minutes.

Ten minutes later he came back in to say they identified the problem (which I thought they had already done) and were fixing it and it should be 5-10 minutes till the film was on. And, to make up for that, they would start it without the trailers (BOO! I LOVE TRAILERS).

Finally, after half an hour, the film started.

But, during the climactic battle sequence at the end of the film THE AUDIO CUT OUT. It went back to playing the pseudo-radio station, and we couldn't hear anything that was being said by Hal Jordan or certain others who were on screen at that moment.

So I again (for the fourth time) called the theater from my seat and asked if anybody was aware the audio was off.

Someone fixed it, but it was after the battle had finished. So the dramatic exchange was completely lost on us. To compensate us, the manager did give everybody free tickets to another 3D showing of either GL or another film. But, really, I was so looking forward to THIS SHOWING and was very disappointed at the bungling at the theater. They've never been this inept before, so I hope it was just a one-time thing.

And, for those of you who were thinking this was going to be about the "Green Lantern" film itself: well, that too was a little disappointing. Granted I went in hoping that the producers could do justice to my favorite Ring Slinger. And they did do a good job of portraying Hal Jordan, Kilowog, Sinestro and others. I would have loved more Lanterns (Kehaan or Laira or Boodikka) or maybe a nod to John Stewart or even Salaak at some point.

But where I think they fell short was in putting together a cohesive story. They went way too quickly from Hal's induction to Parallax, which is a pivotal time for him. I would have rather had the main focus be put on Hector Hammond, who was a wasted villain in the movie.

Really, all in all, the film was middle of the road for someone like me who enjoys the character. But I don't think it's really a film that's going to appeal to most anybody else.

But the climactic scene at the ending, during the credits, got me REALLY excited for a sequel. And I just hope the studios learn from the first film and then make that second film.

Monday, June 13, 2011

"Stay Dead: A Novel Of Survival" by Steve Wands (review)

(the following review is one I posted on Amazon and Barnes & Noble for the book)

The scene was set in his previous work, "Stay Dead: The Stranger And Tunnel Rats". With this book, the author takes us back to life around Titan City and marshals us into new territory in the zombie mythos. We're introduced to several sets of characters whose lives are pushed slowly, and painfully if not downright fatally for several of them, towards an ending that leaves the reader waiting for the next chapter.

I appreciate Wands' writing style and his approach to the material. He has no character that's too good (or too bad) to die. He paints a mental picture with his words that lets you easily imagine a world where people struggle just to get through the day, and night, without giving in to their despair at a world gone dead. His writing helps you connect with the heroes, and even the anti-heroes, and you feel for them. Whether it's a father looking past his children at a distant explosion or brothers trying to find their mother amidst the chaos of a zombie attack, you can't help but be touched by what they're going through.

And the best part (which I wish I could talk about here) is the backbone of the franchise; i.e., what has created the zombie apocalypse. Most (including myself) don't want to know WHY. Just give us the situation and let us go with it. But Wands gives us not an explanation but a hint, a teaser, as to why the dead walk. Even if he never goes beyond that one piece, he has given the reader enough to keep us coming back to find out more.

I have a scale on my podcast of rating things as a buy, a borrow or a don't bother. And I rate "Stay Dead: A Story Of Survival" as a borrow if not an outright buy. You want to read this. And, if you're like me, you'll want to read it again.

The True Battery Life Of The Kindle...

Back in episode 83 of my podcast I did a comparison of the Amazon Kindle, the Barnes & Noble Nook and the Sony eReader. And one of the big points that made the Kindle the hands down winner for me was the battery life. The advertised battery life was given as 30 days with the network turned off, which is double that of the Nook, which is advertised as doing about two weeks.

So as an experiment, I decided in April to test this. I charged my Kindle overnight on 30 April, then packed away the USB cable and chose not to use it again until the battery was drained. All updates to it were sent via wireless (I don't have the 3G, so all downloads were over wifi) and I did send several PDFs to the device.

I also reset my Kindle at one point to clean things up when I realized I had a bunch of PDFs that I no longer needed on the device. I then redownloaded all of my Kindle books to the device.

As of today, 13 June 2011, I finally got the error message that the battery was too low and needed to be charged. That's 44 days on the one charge.

Again, I feel absolutely happy in choosing the Kindle, and love my wife for having given it to me for Christmas this past year.

Wednesday, June 8, 2011

Reporting phishing attempts to Google FTL!

This morning I opened what, to a less skeptical eye, might look like an absolutely valid attempt by Google to verify my Gmail account. It claimed that Google is attempting to eliminate all unused Gmail accounts1 and needed me to simply verify my details.

Now, looking at the link in the email, it does go back to Google (to a hosted site on google.com) and uses a spreadsheet to collect the user's login name, password and date of birth. All things which Google would not require to verify your account.

But something a scammer would definitely like to get out of you since it would give them access to your email to look for bank mails, etc.

And that information coupled with your date of birth can give them a plethora of information to steal your identity.

So I tried to report this phishing attempt to Google. And (here's where Google humps the fail whale) I could not find any way of REPORTING a phishing attempt to Google.

Now, you might be saying:

Why didn't you click on the link to report the attempt on gmail.com?

The simple answer is: I don't use the gmail.com site to read email. I do it all through Thunderbird since I track three separate accounts.

So when I hit an email like this then I want to report it. But I can't really do that if Google doesn't give (even in the Google search results) a way to report an email phishing attempt.


1 I would expect that a legitimate attempt at doing this by Google would simply use the date last accessed to determine what accounts have aged out from lack of use and put them on a list to be deleted. Then they could just email the user a notice and, if that email were not opened after a set period, safely delete the account.

Wednesday, June 1, 2011

GFGI

I do hereby claim Founder's Rights to the phrase "GFGI", which stands for:

GO FUCKING GOOGLE IT

I'd like to thank my cube mate, Mike H., for being the inspiration.

Sunday, May 29, 2011

Frustrated with migrating to Rails 3...

One of the biggest frustrations with mixing dependency management and packaging in Linux is when an upgrade to one forces an upgrade to the other. Or more specifically, when upgrading to a more recent release of Fedora forces an upgrade to the version of Rails installed.

With Fedora 14 we had Rails 2.3.8. Now, with Fedora 15, we have Rails 3.0.5. And, sadly, it's not a simple task to just upgrade a Rails app since quite a bit has changed between 2.3 and 3.0. Enough that it almost feels like it would make more sense to just start over the project by creating a new Rails app and then moving the existing controllers, models, etc. over to the new app.

But that just sounds like a copout. There has to be a way to easily migrate from 2.3 to 3.0.

So guess what I'm doing this weekend?

Wednesday, May 25, 2011

Fedora 15, Gnome 3 and closing your laptop's lid...

I upgraded my system today from Fedora 14 to 15 using the preupgrade tool. Everything went VERY smoothly. My nVidia card (Quadro FX 880M) is completely supported with full hardware acceleration, which is awesome!

But the first thing I noticed once I booted into Gnome 3 was that, when I closed my laptop lid, the system went into suspend mode. Not what I want to have happen: unless I'm on battery, I want the system to just blank the screen when I close the lid.

I went into power settings and could not find a setting for what to do when the lid is closed. Something that was very easy to find in the past, but now is not obvious.

So a quick google search turned up this solution:

To set the action when the laptop lid is closed, simply enter the following commandline:

gsettings set org.gnome.settings-daemon.plugins.power lid-close-MODE-action "ACTION"

where you replace MODE with either ac or battery depending on which mode you're setting, and replace ACTION with blank, suspend, hibernate or shutdown.
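For example, to have closing the lid on AC power just blank the screen:

gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action blank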

Friday, May 20, 2011

Migrating a Subversion repository to Git...

For years now I've been using Subversion to maintain history on a lot of documents for me (my resume, my writings, various configuration files for mutt and offlineimap, etc...).

I love Git, though.

So I decided to migrate my Subversion repo to Git. And I wanted to keep all of my history. And I leaned heavily on John Albin's work to do it.

Step 1: Map Subversion users to Git users.

The key goal here is to keep the history intact, which includes the names used (though in this case it's always me, but it's the principle, right?). To do that I created a text file that maps the Subversion account to an email account.

svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > users.txt

Now edit this file, which looks like:

mcpierce = mcpierce

and change it like this:

mcpierce = Darryl L. Pierce

Step 2: Clone the Subversion repo using Git.

This step is the crucial one, since it's here that we'll take all of that history out of Subversion and put it into a brand new Git repository.

Well, WE won't. Git will.

git svn clone [my Subversion repository] -A users.txt ~/temp

This took a while since it's going to migrate that history into the new repository. But, when it completes, you should be able to go to the repository and see all of that history.


cd ~/temp
git log

commit 50f06a699973bcd954fb1e61dc50deafee6a085f
Author: Darryl L. Pierce
Date: Fri May 20 14:13:43 2011 +0000

Updated my email configurations for home and work.

offlineimap can pull both work and home email down.

esmtp can send both work and home email.


git-svn-id: https://mcpierce.dyndns.org/repos/home/personal@659 7533b618-34f6-46d8-b546-5fbeb33e39a2

commit 00fb86a7314ec754108e0931d59a30464dce8510
Author: Darryl L. Pierce
Date: Thu May 19 19:47:59 2011 +0000
...


and you should see all of the commit authors' names properly mapped over.

Step 3: Create the new remote Git repo and push the content

This step I'm not going to describe in a lot of detail. Specifically, I'm not going to tell you how to create a remote repository and access it. Instead, that's an exercise for you.

But, once you've created that remote repo and can access it, simply go to your new local repo, add that remote one and push your changes:

cd ~/temp
git remote add home [the remote git repository]
git fetch home
git push home HEAD:master


Now your Subversion repository history is in that remote repository.

Tuesday, May 3, 2011

Getting Picasa To Work On Fedora 14 (x86_64)

My cubemate, Mike, just came back from a two week European vacation and work trip. He was showing me photos from Vienna and Brno on his laptop using Google's Picasa photo library. I also keep my photos on Picasa but use our home Windows 7 machine to do the synching just out of habit.

But after seeing his pictures I decided to install Google's Picasa Linux build on my laptop running Fedora 14 64bit.

The first issue I had was with the only version available being 32bit. I'm not sure why software companies are still pushing only 32bit versions when 64bit is available and growing in adoption. But that's another story.

So after downloading and installing the Picasa RPM (available here) the software didn't start but had some errors:

(mcpierce@mcpierce-laptop:~)$ picasa
/usr/bin/picasa: line 139: 12417 Segmentation fault      (core dumped) "$PIC_BINDIR"/wrapper check_dir.exe.so
/usr/bin/picasa: line 175: 12528 Segmentation fault      (core dumped) "$PIC_BINDIR/wrapper" regedit /E $registry_export HKEY_USERS\\S-1-5-4\\Software\\Google\\Picasa\\Picasa2\\Preferences\\

So, obviously, out of the box this doesn't work. The build for Linux is actually the Windows build packaged up with Wine to run on Linux rather than a native build. So to solve the loading issues I had to dirty my nice 64bit environment by installing a series of 32bit packages and then copy some of those files over into the Picasa install location:

[root@mcpierce-laptop ~]# cp /usr/bin/wine-preloader /opt/google/picasa/3.0/wine/bin/wine-preloader
cp: overwrite `/opt/google/picasa/3.0/wine/bin/wine-preloader'? y

This fixes the segmentation faults that popped up initially and allows Picasa to actually finish loading under Wine.

The next problem is that the Wine implementation Picasa installs was unable to access the internet so that I could download my existing albums from Picasa. To fix that I had to replace the wininet.dll.so file that Picasa installed.

[root@mcpierce-laptop ~]# cp /usr/lib/wine/wininet.dll.so /opt/google/picasa/3.0/wine/lib/wine/wininet.dll.so
cp: overwrite `/opt/google/picasa/3.0/wine/lib/wine/wininet.dll.so'? y

Once that was done then Picasa was able to log in as me and sync my web albums to my laptop.

So now I can keep my pictures on my Linux laptop. One less reason to have a Windows machine at home.

But, still I have one more thing to say:

HEY! GOOGLE! How about a NATIVE LINUX VERSION of Picasa? If you support free and open source software then indulge us with an open source copy of your free tool!

Monday, April 25, 2011

Fedora, KDE and no sound (FIX)...

I've, for a long time, been using Gnome as my desktop environment. But recently I decided to switch over to KDE for a few different reasons.

A part of my regular work environment is to have Sirius running in the background so I can listen to Howard Stern while I GSD1. And to do that I run the Sipie app.

But when I switched over to KDE the audio immediately died. So I tried to run the (horrible) Sirius web-based application and that failed to play audio as well. This was a clue to what the problem was. After trying to play a YouTube video and getting only video I knew it was the audio setup for KDE.

I started up the System Settings app and navigated to the Multimedia settings and clicked on the Phonon tab. All of the audio devices were "High Definition Audio Controller Digital Stereo (HDMI)" which, of course, isn't how I'm using my laptop normally.

After switching it over to "Internal Audio Analog Stereo" I still didn't get any sound. So one more check was necessary.

The problem was that, by default, the phonon backend installed on Fedora is phonon-backend-xine. But for using mplayer (on which Sipie depends) you need to have the phonon-backend-gstreamer package installed.
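Which, on a stock Fedora install, is just:

yum install phonon-backend-gstreamer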

So I installed that package and voila! Howard's playing, sounds are good and I'm back to work.

1 Get Shit Done

Wednesday, April 20, 2011

Spring 2011 Has Now Sprung (Sprang)...

Today I woke up to a 70F (21C) morning. And my beard was itchy, as it is every year when the temperature, and more importantly the humidity, go up here in NC.

So it was time for the winter coat to go away. Buzzed it with my trimmer then shaved it off.

Then it was time to unclog the drain....

Monday, April 18, 2011

Birthday parties...

Rachel with one of her gifts: "Big Green Lizard"
My daughter, Rachel, turned 5 this weekend. For her birthday we had a party at the Jump Zone! here in town and invited her friends from preschool. They all had a great time jumping, eating cake and all.

Hard to believe it's been five years since she was born. My wife and I had tried so hard for a few years to have a third child. And we were both pretty much discouraged and thought it was never going to happen. Fertility treatments, needles, hoping and never sure what would happen.

It was almost six years ago that we went to Myrtle Beach for vacation and she surprised me with the news. We were floating around the lazy river at our hotel in innertubes when she grabbed my hand and pulled me over to her.

"I'm pregnant."

I was exhilarated!

We waited to tell her parents until we were sure things were okay. When we were convinced, and after we found out we were having a girl (which stunned us both), we broke the news to our family and friends.

Christene told her parents by laying out a onesie on our bed that said "I was worth the wait" and then letting her mother see it during a visit. I first told my buddy Mario after flying up to Philly for Labor Day Weekend.

Damn, five years have passed and it feels like only yesterday. Yet, at the same time, I can't remember what life was like without my Little Sweety Baby.

Daddy loves you, Rachel!

Sunday, April 10, 2011

Chapter one of my attempt at writing a novel...

Here's chapter one of my as-yet-unnamed novel I'm attempting to write. Would you read the rest of the book based on this?



The Discovery

It was cold. It was dimly lit. It smelled of antiseptic and cleaning solution and, under all of that, decay. It was a laboratory that held all kinds of memories for Dr. Theodore Sullivan; memories of struggle, of hopes dashed against the rocks, of frustration and confusion and anger and the desire to just give up and walk away.
But, right now, it was the most hope-filled room in the world to him.


Dr. Sullivan called out from the makeshift lab. “Rose, can you come here for a minute?”
Rosemarie Fuller, who had been a physician's assistant before the end of the world, entered the room. She found Dr. Sullivan peering into a high-powered microscope, a set of slides on the table beside him. “What is it, Ted?” she asked as she approached him.
“Tell me what you see in this sequence.” He then replaced the slide on the stage and allowed Rose to take his place at the table. She leaned down, peering into the stereo microscope.
“I see several cells which show signs of infection,” Rose looked up from the microscope. “Is there something else I should be seeing?”
“Patience, Rose,” Dr. Sullivan replied. “Now take a look at this one.” He raised the microscope's objective lens, removed the slide and, after checking the next one, placed it on the stage. “Take a look at this one and tell me what you see.”
Rose again leaned down. After a minute of adjusting and looking, she again stood up. “This looks like a slightly less infected cell cluster. What are you getting at, Ted? Just tell me, already, because I've got work to do.”
Despite her having come to work in the labs in the Pisgah Safe Zone, Rose wasn't big on mysteries. Her job was to help find a cure, not spend time on understanding its origins. That was a mystery for people with the luxury of time. And with the amount of work to be done, the last thing she felt like doing was playing twenty questions with Ted.
“Damn, you're in a mood today,” Dr. Sullivan replied sarcastically, trying to lighten the mood. He again raised the objective and swapped out the slide, this time with the one from the bottom of his stack.
“There. Take a look at this one and tell me what you see.”
As Rose leaned in, Dr. Sullivan took the other slides and put a few of them on the stages of the other three microscopes in the room and then peered into each.
He'd barely adjusted the focus on the second one when Rose stood up. A cautious look on her face gave a hint of what she was thinking.
A little formality crept into her voice. “Are these slides out of order, doctor?” she asked Sullivan.
“No, they're not. The first one was taken from test subject 129G prior to injection with formula AV717. The other slides were taken at 20 minute intervals for several hours afterward.”
The look of caution stayed on her face. But her eyes started to fill with tears, betraying her hope.
“Rose, I think we've found the cure.”


Dr. Sullivan and his team weren't the best at what they were doing. Hell, more than half of them weren't even experts in the field of virology. They were a team put together due to circumstances. Sullivan had experience in the field, which is why he led them. The others were statisticians, biochemists, nurses and from other fields.
But the end of the world doesn't give you the luxury of putting together an A-Team in order to find a cure. Instead you have to work with what's available, in more ways than one. The team wasn't the best or the brightest. The facilities weren't state of the art. Instead they were the ones that survived using what was left behind. Their ability to collaborate with other teams was limited. But it at least enabled them to work, which is what mattered most.
And work they did. Day and night. For years. Since the end of the world came to stay. And their work was finally going to pay off, they hoped. The human race was losing the battle against an enemy whose ranks grew with every one of their deaths. And it wouldn't be too many more years before the enemy achieved total victory.


The pressure to find a cure is what drove Dr. Sullivan and his team. And it's what drove a few of them over the edge. Sure, they were all survivors of some sort. But that survival didn't give everybody the ability to endure the aftermath.
But not everybody who survives a tragedy wants to live with that victory. Sometimes it's the sense of loss of those who died. Sometimes it's the guilt over those who could have been saved. It's the could haves, should haves and would haves that can lead a person who was lucky enough to survive the first wave to end their lucky streak at some other point.
Dr. Sullivan had lost three of his research colleagues in the intervening seven years since the plague. All of them had lost their will to continue or saw their work as ultimately futile. Especially when it was discovered that everybody, except for the rare few, had already been infected. The enemy had infiltrated the world and was slowly and relentlessly turning the tide in its own favor.
It was one of those three that ultimately led the team to their first big breakthrough. And it was that event that helped them to isolate the means by which the virus restarted the basic systems of the body. Like a small operating system image, it was able to get certain parts of the body running again. Just enough to allow the virus to propagate itself.
That discovery led to a rapid series of hypotheses and experiments that proved successful in not only breaking down and understanding the virus, but also in finding a cure for it. A cure that not only stopped the virus from working on the dead, but also from affecting the living. A cure that could bring humanity back from the brink.
Now they just needed to find a way to mass produce and distribute the cure. It was indeed a breakthrough, but they needed to act to use it before it was too late.

Tuesday, April 5, 2011

The Physics Of Flopping

My son, who weighs around 59 kg (~130 lb), flops onto his bed from a distance of about 0.5 meters above it. From standing to sitting he has accelerated to approximately 4.9 m/s by the time he hits the mattress, assuming it takes him about 0.5 seconds to go from standing to sitting. At the moment he hits the mattress, the momentum he's delivering to it and the frame is about 289.1 kg*m/s. He's had this bed for about 5 years now, or about 1,931 days.

So if we average out flopping down about once per day for that time period, and average out his weight from then until now (say, about 40 kg as the average), then we have:


1,931 days * 40 kg * 0.5 s * 9.8 m/s²  =  378,476 kg*m/s of momentum over that time period!

That's the equivalent of the momentum of a 909 kg (1 ton) car traveling at 1,499 km/h (~931 miles/hour), just amortized over five years.

Is it any wonder that he broke the bed's frame this evening?
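For anyone who wants to check the back-of-the-envelope arithmetic, here it is in Ruby (same numbers as above):

# flopping.rb -- the numbers from the post
days      = 1_931
avg_mass  = 40.0    # kg, his weight averaged over the five years
fall_time = 0.5     # s, from standing to sitting
g         = 9.8     # m/s^2

per_flop = avg_mass * g * fall_time   # momentum per flop, kg*m/s
total    = per_flop * days            # ~378,476 kg*m/s

car_mass  = 909.0                     # kg
car_speed = (total / car_mass) * 3.6  # ~1,499 km/h

puts "total: #{total.round} kg*m/s, car speed: #{car_speed.round} km/h"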

Monday, March 28, 2011

Easily Updating A Release With Fedora...

At home I have a perimeter server that runs Fedora. It's a machine I rarely think about, sitting on the fringe of my home network. I never SSH into it and instead only hit it for Subversion or Git repositories I have there for personal use.

So today I was working and, for some reason, decided to go and update the installed packages and do some general housekeeping on the box. And while on it I realized it was a release behind, running F13 while F14 is the latest (and F15 just went to Alpha).

I decided to upgrade it to F14 with an in-place upgrade using the following steps.

First things first. Go to multiuser mode and disable X:

Ctrl-Alt-F2

init 3 

Install rpmconf (if you don't already have it):

yum install rpmconf -y

Clean up any configuration files with:

rpmconf -a --frontend=vimdiff

Upgrade the SSL certificates on F13:

rpm --import https://fedoraproject.org/static/97A1071F.txt

Update Yum itself:

yum upgrade yum

Reset all yum cached package lists:

yum clean all

Now do a full distro sync:

yum --releasever=14 distro-sync

This is the longest part of the whole process since it's going to pull down every single upgrade for your system. For me it was on the order of about 900 packages.

When that finishes, you want to then make sure that all Base group packages are installed:

yum groupupdate Base

Now prepare your system for a full reboot:

/sbin/grub-install BOOTDEVICE
cd /etc/rc.d/init.d; for f in *; do /sbin/chkconfig $f resetpriorities; done
package-cleanup --orphans

Now you can reboot your system and have a fully working, running F14 (in this case) system.

Sunday, March 27, 2011

Straight But Not Narrow...

I saw a video this morning that brought back some memories.


I had a pin that I'd bought years before that which read "Straight But Not Narrow". The pin was always on my old denim jacket at the time and I wish I knew what had happened to it. It probably went the way of that denim jacket and was donated.

The message, though, hasn't changed in all these years. I'm straight, but I'm not narrow minded.

Saturday, March 12, 2011

Should A Unit Test Contain Only One Assertion?

In a previous post I wrote about questioning grades and a unit test assignment for a class I'm taking this semester. In this post I'm going to write about one of the items mentioned in the feedback I received, which was:

Test methods should contain only one assertion. This way, if a test fails then the program will be more helpful to say exactly which assertion failed (and therefore what the circumstances were that caused the failure).
I completely reject this assertion for two reasons: 1) a well-written unit test framework should tell you exactly which assertion failed when there is more than one in a unit test, and 2) having a single assertion per test makes the tests more fragile and increases the likelihood that the tests themselves will be flawed.

The first reason is fairly obvious. Since all runtime environments provide stack traces on exceptions, there's no reason why a unit testing framework can't display one on a failed assertion. So the framework can easily tell you which assertion within a single test failed. Therefore there's no reason to have a single assertion in a test, since the stack trace can take you directly to the failure. And, additionally, most IDEs will let you jump directly to the line so you can verify the failure and fix the cause.

So a single assertion per test method is not more helpful in finding the cause of a failure.

The second reason is the more serious of the two since I believe it leads to bad coding practices. The idea that unit tests should be limited to a single assertion per test assumes that a unit test is only verifying one specific post-condition. Take, for example, an object cloning operation:


@Test
public void testClone() {
    Farkle source = new Farkle();
    Farkle result = source.clone();

    assertNotNull(result);
    assertEquals(source, result);
}

In this example there are two assertions being performed: the first one ensures that an object was returned by the clone() method and the second ensures that the returned object fulfills the public contract for the clone() method; i.e., the object returned is equal to the original.

Now, if we were to follow the "one assertion per test" tenet then the above would have to be split into two unit tests:

public void testCloneReturnsAnObject()
public void testCloneReturnsEqualObject()

We've now doubled the number of test methods. However, we haven't done anything more than that. Sure, it can be argued that we have made the tests more specific and that that is itself a good thing. But we are now also required to set up the same test preconditions twice, since both tests have to instantiate the same source and run the same method to produce the same post-condition, only to test different aspects separately. This violates the concept of DRY and as such is a bad practice.
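Spelled out, the split version looks something like this (same hypothetical Farkle class as above, with the usual static imports of the JUnit assertions assumed):

@Test
public void testCloneReturnsAnObject() {
    Farkle source = new Farkle();   // the same precondition setup...
    assertNotNull(source.clone());
}

@Test
public void testCloneReturnsEqualObject() {
    Farkle source = new Farkle();   // ...duplicated here
    assertEquals(source, source.clone());
}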

"Well, why not move the initialization into your setUp() method?"

The answer is this: setUp() (a method specific to JUnit, but each testing framework has its own analog) isn't supposed to set up every single test's preconditions. Its role is to set up the general preconditions for the entire test case itself.

Making setUp() do the work for every single test would also be very inefficient, since it is run before every one of the unit tests in the class (in JUnit the setUp() method, or any method annotated with @Before, is called before every single unit test method is invoked). So that's a LOT of wasted setup work if we were to move all preconditions out of the tests themselves.

Additionally, by moving the preconditions out of the test methods we hide what the expected state is for those tests. Now you would have to go to the setup method to see details that should be right there in the test, making it harder than necessary to debug when code stops passing tests. Something easily avoided by keeping the precondition code in the test itself.

So now that I've defined why the preconditions need to stay in the test method itself, that leads us to the final reason why a single assertion per test method is a Bad Thing (tm): the test code itself could easily become invalid.

Imagine a slightly more complicated test, such as one that verifies five different aspects of the result of a method call. Split into one assertion per test, that becomes five tests, which again violates the DRY principle since we're doing the same setup in five different places. We also run the risk of the tests themselves drifting apart as, with refactoring, one test gets updated while another may not. And down the road those differences can result in false negatives or, worse yet, false positives.

Instead, if we keep only as many assertions as necessary in a unit test, we make the tests themselves more expressive, by showing expected preconditions, and more targeted, by checking for expected outcomes.

Requiring tests to contain only a single assertion is a bad practice that leads to writing more code than necessary and either repeating yourself or else writing inefficient, and potentially ineffective, tests.

Always Question Your Grades...

Christene and I always tell our kids to go over tests and make sure the grade you received is the one you deserved. Teachers aren't always perfect and they do make mistakes. And it's not an insult to them to question why points are taken off for a problem: if it was valid then you can learn from it, and if it was invalid you can get it fixed.

This semester I'm taking a distance Java course at State. I posted about our first project a few weeks ago. The projects are split into two parts each: one part where you write the functional code, the other where you write unit tests to validate the code. As a professional developer, I've been using test driven development methodologies for years now. So when I sat down to do this assignment I wrote all of the tests as I wrote the code and ensured my code was at 100% test coverage and accurate.

On the functional portion I received 100%.

But on the unit test portion I only received 86%. So I emailed the TA to ask why I lost 14 points. In the feedback on the assignment it said that two of my 19 tests had failed, and I lost 7 of 10 points for each. But in my development environment no tests fail. I ran them in both Eclipse and also from the command line (the project is managed using Maven) and in both cases everything is perfect.

I'm just not sure how my tests could have failed if the code were working correctly. If anything, the tests ought to only work then if the code were flawed, which it's not.

I haven't heard anything back yet (I only emailed him a little while ago) but I'll post here whatever feedback I get.

Friday, March 4, 2011

Advanced Placement Night

After work last night, Caleb and I drove down to his high school for Advanced Placement Night. He's already taking honors classes and getting straight As, so both the school and Christene and I feel he would benefit from the AP classes.

The presentation was fairly straight forward. Mr. Reid gave a presentation about what the AP courses are and what students should expect, both out of the class and to put into the class.

Students will likely have a few hours of additional work per day and required summer projects. But the benefits, college credits during high school, make it worthwhile.

After the presentation, Caleb and I visited the tables for the AP classes in which he's initially interested (US History, Chemistry, Biology, Calculus and Physics) to get their descriptions. The program also provided him with a road map of what honors classes he'll need to take prior to each of the classes. Then we sat and read, then discussed, the course descriptions.

From what I read it appears that the AP Physics class would cover a first semester Physics class, AP Chemistry would be Chem 1 & 2, AP Calculus would be a full Calculus program, and AP Biology would cover BIO 1 and most of BIO 2.

He's going to be in a good position to enter university given what's available, with most of his first year courses out of the way via AP classes.