NginX Upgraded In Place

NginX 0.7.2 is now available. I was running an ancient version, 0.7.1 (just kidding). 😉

So, I downloaded and built 0.7.2. Then I followed the instructions at the bottom of this page, which explain how to upgrade on the fly. It’s even cooler than just not losing any ongoing connections (which would be cool enough!). It also can be rolled back just as easily, if the new server proves to be faulty.
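
The dance itself is just a handful of signals to the master processes. Roughly (a sketch, assuming the default /usr/local/nginx prefix for the pid files):

# Start the new binary; the old master renames its pid file to nginx.pid.oldbin.
kill -USR2 `cat /usr/local/nginx/logs/nginx.pid`

# Gracefully retire the old workers (the old master sticks around, just in case).
kill -WINCH `cat /usr/local/nginx/logs/nginx.pid.oldbin`

# Happy with the new server? Shut the old master down for good:
kill -QUIT `cat /usr/local/nginx/logs/nginx.pid.oldbin`

# Not happy? Roll back: revive the old workers, then quit the new master:
# kill -HUP `cat /usr/local/nginx/logs/nginx.pid.oldbin`
# kill -QUIT `cat /usr/local/nginx/logs/nginx.pid`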

This is just too cool! Anyway, welcome NginX 0.7.2, RIP NginX 0.7.1.

WordPress Ate My Posting Date

This post is really a test, but I’ll share the full problem and solution (assuming the test works), so it might help someone else out there as well. 🙂

The other day, for the first time ever, I noticed that a post of mine hit my RSS reader (Thunderbird) with a date of 11/29/1999.  I was busy with other things and decided to come back to it. Of course, I didn’t…

The next time I posted, the date was correct again. I (foolishly) assumed that the problem was transient (and possibly corrected). A few more good posts, then yesterday, back-to-back bad posts. I realized exactly what was different between the good posts and the bad ones. At least procrastinating paid off this time. 🙂

All of my good posts were made with Windows Live Writer locally on my laptop and then saved to the server as a draft for continued editing. All of my bad posts were started and finished directly in the web interface of WordPress 2.5.1.

This certainly wasn’t a native 2.5.1 problem, because I’ve been running 2.5.1 a lot longer than my first bad RSS pull and until reasonably recently, I posted 100% of the time through the web interface. I selected the last few rows from the wp_posts table and spotted the problem instantly.
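
For anyone who wants to peek at their own install, it amounted to something like this (hypothetical database name and credentials; the real table has many more columns):

mysql -u wp_user -p wordpress -e "SELECT ID, post_date, post_date_gmt FROM wp_posts ORDER BY ID DESC LIMIT 5;"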

The post_date_gmt had the following value in my three bad posts: 0000-00-00. A Google search revealed this forum thread discussing the problem. Some people seemed to think it was related to the installation of XCache (others thought that you also needed a separate plugin installed to provoke the problem). Since I recently installed XCache (as reported in my NginX post), I was more than willing to believe that this was my problem. Of course, I didn’t see how it worked when Windows Live Writer was posting, but it was certainly possible that it took a different code path…

There was a pointer to a patch in that thread. The person who posted the link also commented that he thought this was not the “right” fix, though he said it worked. I’m not sure why he felt that way. Aside from being innocuous (meaning, there was no way that this could hurt anything), it seemed like a reasonable fix and was already in the trunk for WordPress 2.5.2, so it also seemed safe to apply, given that it would soon be production WordPress anyway.

So, I downloaded and applied the patch. I won’t know until I click on Publish whether it will work, as the post_date_gmt column correctly remains at 0000-00-00 while I’m in Draft mode. If it works, I won’t update this post. If it fails, I’ll come back and add another paragraph to let you know it didn’t work…

Update: I decided to add this bit quickly. It worked (so that’s not why I’m back here). I thought of one more possibility for the cause, which has nothing to do with XCache. Recently, I noticed that my posting times were in EST, not EDT (meaning, the one hour time-zone change didn’t happen automatically). I went into the WP Admin interface and adjusted the setting to be GMT-4 instead of GMT-5. It’s possible that this change caused the problem as well, but I really have no idea…

Firefox 3.0 is Awesome

Yesterday, I took a (cheap) shot at the Firefox launch fiasco. I didn’t need to do it, but I don’t really regret it either. You can’t have it both ways. You can’t want to hype the hell out of the launch, and then be unprepared for it.

Anyway, even though the official launch was delayed by 76 minutes, the US suffered an outage that lasted at least 30 minutes longer.

It doesn’t matter. They are blowing the numbers off the screen and there are still more than four hours left to download Firefox.

As I predicted yesterday, I accounted for four legitimate downloads, two yesterday (for Lois and me) and two today for the ancient guest machines.

It took me a while to be sure that Firefox 3 was faster than Firefox 2, but I’m convinced it is now. There is one site I visit, typically once a day, that takes a very long time to paint/load. It loads substantially faster in Firefox 3.

That’s not why I titled this Firefox 3.0 is Awesome. If you know Lois, then you know that she’s effectively legally blind. We run her laptop at a bizarre resolution, and then still have to pump the DPI (dots per inch) up, so that everything is hideously large on her screen. I can’t even begin to tell you what that does to any reasonably complex web page.

Aside from the fact that very little of the page typically fits on Lois’ 17″ monitor, most complex web layouts overlap the text, including hyperlinks (for her), making it near impossible to click on some links (you simply can’t get the mouse over the link!), and many of the pages are nearly unreadable as well. This has been true (for years!) on both Firefox (all versions) and IE (all versions).

Enter Firefox 3.0! Lois’ default home page is Google News. The minute we launched Firefox 3.0, we saw the page laid out perfectly. Unfortunately, it was gigantic, with scroll bars for both horizontal and vertical scrolling. Since it was too large, we went into the preferences, and saw that her font size was set to 28! (I told you, we have had to tinker lots just to make her able to see!)

We dropped the font size to 18 and reloaded. It got a drop better, but not much. Another look at the preference panel and we realized that we needed to go into the Advanced settings. In there, we saw that Lois had a minimum font size of 24. We changed that down to 18 as well, and the page now displayed perfectly for her.

Then we went to our bank site, where Lois always has trouble hitting certain links. It painted perfectly (albeit with scroll bars). We played with Ctrl-Minus (Ctrl -) and the page shrunk perfectly. Previously, that key combination might have shrunk the text, but the rest of the layout often got worse. Now, Ctrl-Minus and Ctrl-Plus (Ctrl +) work flawlessly. Yippee!

Finally, I didn’t care about most of the addons that weren’t Firefox 3.0 ready. One I really cared about was TinyURL Creator, and one I like (but could live without) is Linkification. Rather than disabling version checking (dangerous at best), I decided to hand-tweak these two, since neither is sophisticated, and therefore unlikely to fail or cause harm.

In both cases, I found which directory they were installed in under the extensions directory in my Firefox profile (this is on Windows). I then edited the install.rdf file, found the maximum version number they claimed to run under, and changed it from 2 to 3. Voila: after restarting Firefox, both work beautifully again.
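
If you’ve never looked inside one, the relevant stanza of install.rdf looks roughly like this (the GUID is Firefox’s; the exact version strings vary per addon):

<em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>1.5</em:minVersion>
    <!-- was something like 2.0.0.*; bumping it lets Firefox 3.0 load the addon -->
    <em:maxVersion>3.0.*</em:maxVersion>
  </Description>
</em:targetApplication>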

Welcome Firefox 3.0, we’re both very glad to have you on board! 🙂

Firefox 3.0 Download Day

Well, it’s finally here. I was all set to do my part in helping them attain the nonsensical world-record download numbers.

Both Lois and I use Firefox as our primary browsers, and I have an ancient laptop that I occasionally use in a hotel room that I was going to upgrade, plus another that I sometimes use as a guest machine. So, I was about to legitimately download it four times (I hate all manner of ballot stuffing, so I wouldn’t download multiple times just to up the count!).

Lo and behold, 10am PDT came, and the Mozilla site was 100% unresponsive. Way to plan for a world record!

Fully 15 minutes later, when you would think they would have been aware of the problem and switched to Plan B (they did have a Plan B, no?!?), the site was still pathetically slow (at least it finally loaded!), but amazingly, it still only offered the link to Firefox 2.0.0.14!

How embarrassing…

I’ll still download it when I can, because I love the product, and everything I’ve read about version 3.0 makes me want it, so I have no desire to punish them, or cut my nose off to spite my face, but this one definitely goes in the pile of monumental screw-ups…

Getting Very Tired of AlertThingy

I’ve been using AlertThingy for a while now. I like the concept a lot. It started out as an Adobe Air front-end to FriendFeed.

Let’s begin (briefly, at least brief by my normal standards) 😉 with FriendFeed itself. At a minimum, FriendFeed is a social media aggregator. You can have it create a feed for you from many (currently 35!) different services (e.g., Twitter, blogs, Google Reader, iLike, Facebook, etc.). Now, if someone wants to truly follow you, they can become your friend in one place, and not have to have accounts and joint friendships with you on all of the different services that you belong to (or will in the future!).

Great idea, and for my limited use so far, pretty darn good execution as well. I’m definitely a FriendFeed fan.

There are a number of ways to consume FriendFeed, including simply visiting their website whenever you want to see what your friends are up to. You can also get updates via email. Because FriendFeed is part of the wonderful trend of services that provide full APIs, you can also use other clients to access FriendFeed.

AlertThingy started out as one such client. While it has a nice UI (at least a reasonable one), one instant quibble is that I can’t find a way to quit the program (perhaps I’m just dense). I have to go to the tray, right-click on it, and select “Exit AlertThingy”. Yuck. Also, on the settings page, I have the Launch at Startup box unchecked, and yet it continues to launch every time I start up Windows. 🙁

So, let’s start with the good. Instead of having to log on to the FriendFeed website, and refresh every once in a while, AlertThingy will instantly alert me to anything that the people I’m following have to say on any of their services. Cool. Perfect? No.

The first problem is that not everyone I follow has a FriendFeed account (or I haven’t bothered to look for them there, etc.). So, in addition to following some people on FriendFeed, I still have to check (in any number of ways) Twitter (for example), for those people whose tweets I’m interested in. Of course, if I launch a Twitter client (Twhirl is fantastic, also written in Adobe Air), I’ll see the tweets I’m interested in. But I’ll also get an alert in AlertThingy for the same tweet, if the person is also connected to me on FriendFeed.

That’s not the end of the world, but it’s not pretty (or conducive to managing interruptions) either. Now multiply the problem for every service you check (beyond the one Twitter example above), and you can see that it could get out of hand quickly.

Of course, this is not an AlertThingy problem, just a social media proliferation one, where the FriendFeed aggregation is getting in the way, by duplicating things. Of course, in this particular instance, if everyone was on FriendFeed (well, at least everyone that I cared about), that one part of the problem wouldn’t exist.

Now on to my real complaint, and why I’m getting tired of AlertThingy. AlertThingy was not satisfied with solely being a front end to FriendFeed. Since other services have APIs too, they started (a while ago) offering direct Twitter integration (Flickr too, and probably more coming).

Sounds great at first. I don’t have to launch Twhirl any longer (for example), since now I can see tweets from non-FriendFeed people directly in AlertThingy. Of course, when you think about it, it’s not a great idea, because it becomes its own sort of FriendFeed, while still providing FriendFeed… Ah, so there will be duplicates for people who are on FriendFeed, no? No! AlertThingy cleverly (yes, the italics are there for sarcasm) removes duplicates (at your request).

No, they do not do it cleverly. They do a few things wrong, including alerting you multiple times. First, you get an alert when the real tweet comes through. Then, you get another alert when the FriendFeed broadcast of the same tweet comes in, even though AlertThingy only shows it once. But, that’s not the real problem at all. When they de-dupe, they totally screw up the timestamp. One or the other message gets timestamped with Greenwich Mean Time rather than local time.

For me, that means that these alerts get sorted to the top. Then, a new alert comes in on FriendFeed only (say a Google Reader share), which never gets duped, so it has the correct timestamp. That will be sorted below the de-duped stuff. Possibly, even pages below, since I’m five hours behind GMT!

This one super-annoyance is maddeningly easy to fix, and I sent feedback to the author on his site (and never heard back). All one needs to do is never accept a timestamp in the future (dupe or otherwise). If a message is about to get timestamped (and sorted), it should be the lesser of now and whatever timestamp the message claims it is. Simple.
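
In pseudocode, the whole fix is a one-liner (a minimal sketch; I obviously have no idea what AlertThingy’s internals actually look like):

from datetime import datetime, timezone

def effective_timestamp(claimed: datetime) -> datetime:
    # Clamp to "now" so a GMT timestamp mislabeled as local time
    # can never sort a message into the future (assumes a
    # timezone-aware datetime).
    return min(claimed, datetime.now(timezone.utc))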

So, what happens to me is that AlertThingy makes a noise and flashes, and then I have to look through a very long list of alerts to see which one is new, and it might be 20 down from the top. I can’t stand it any longer…

What makes it really bad for me is that I follow one particular person who is a prolific social media user (it’s part of his job, so I’m not blaming him). Today alone, he has created 43 separate alerts that are still visible in AlertThingy (others have already scrolled off the bottom and been archived). And it’s not even 43 genuinely new things. When he blogs, I get a blog alert, a tweet pointing to his blog, a Tumblr alert, etc. That’s a FriendFeed problem, as he needs to ensure that people see his stuff no matter where he posts it, and he too can’t be sure those people know about his FriendFeed.

He’s not uninteresting, and he’s one of the nicest people I know. That said, I’m dying to unsubscribe from him, but I don’t want to hurt his feelings. Since he’ll likely read this (and know who he is), perhaps I can get away with unsubscribing now. Especially today, since he has started Digging things beyond belief, and his FriendFeed automatically picks them up. His acceleration of alerts is killing me.

So, I’ll probably both unsubscribe from him and stop using AlertThingy, and start checking FriendFeed on the web (less frequently) or via a once-a-day email. I’ll hear about things later than I would have (other than tweets, which I’ll likely still follow via Twhirl), but my sanity will return…

NginX Reporting for Duty

Last week, my friend Jamie Thingelstad tweeted from the Rails Conference that he was considering switching to NginX (I’ll probably drop the caps starting now) after sitting in on a session about it.

Prior to that mention, I had only heard of it once, when an internal tech email at Zope Corporation mentioned that one of our partners was happy with it as their web server. I didn’t pay too much attention to it at the time…

I was curious, so I downloaded and built nginx-0.7.1. Since there is no equivalent to mod_php for nginx, I also had to decide on a strategy for managing PHP via FastCGI for this blog (I use Zope to serve up all other publicly available content on the site). It seemed that the most prevalent solution out there was to use spawn-fcgi, part of the lighttpd web server package (another very popular alternative to Apache, which is what I was using).

After searching further, I noticed that a number of people were recommending a package called php-fpm (PHP FastCGI Process Manager). I decided that I’d give this one a try first.

Finally, while researching all of this, in particular noting results by WordPress users (after all, that was my one use case for doing this), I also decided to install an op-code cacher for the first time ever, settling on XCache (also brought to you by the fine folks who bring you Lighttpd!).

Within minutes I had nginx installed and running, serving content on an alternate port (so Apache was still serving all production content on port 80). Shortly thereafter I had php-fpm installed and running as well. XCache installed easily too, but it did take me a few minutes longer to correctly change one particular entry in the conf file.

I had some serious frustrations along the way, all of which were due to a fundamental lack of understanding on my part of how things are ultimately supposed to work (meaning, nothing was wrong with the software, just with my configuration of it!), but to underscore the bottom line, nginx/php-fpm/xcache are all in production now. If you’re reading this in your RSS reader, or in a browser, it was delivered to you by that combination of software (unless this is far in the future, and I’ve changed again). 😉

If you aren’t interested in the twists and turns of getting this into production, then read no further, as the above tells the story. If you care to know what pitfalls I encountered, and what I had to do to overcome them, read on…

Getting everything set up to generically serve content on the alternate port was trivial. Like I said earlier, I was serving test pages in minutes (even before downloading php-fpm and xcache). In that regard, you have to love the simplicity of nginx!

With extremely little effort, I was able to write a single-line proxy_pass rule in nginx to serve up all of my Zope content on the alternate port. Done! I figured that this was going to be really easy, since the WordPress/PHP stuff has so many more real-world examples documented. Oops, I was wrong.
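
That rule amounted to little more than this (a sketch; 8080 is a stand-in for whatever port Zope actually listens on):

location / {
    # Everything not matched elsewhere goes straight to Zope.
    proxy_pass http://127.0.0.1:8080;
}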

To begin with, the main reason that I was wrong is due specifically to WordPress (hereinafter, WP), and not nginx or PHP whatsoever. Basically, I ran into the same problem the first time I fired up XAMPP on my laptop to run a local copy of my production WP installation.

WP stores the canonical name of the blog in the database, and returns it on every request. This means that your browser automatically gets redirected, which at a minimum (in the nginx case) dropped the alternate port, and in the XAMPP case, replaced localhost with www.opticality.com. The fix for XAMPP was easy (yet still annoying!). I had to go into the MySQL database, and change all references to www.opticality.com to localhost.
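
For reference, the two canonical values live in the wp_options table, so the XAMPP fix boiled down to something like this (hypothetical database name; posts and comments can embed the old hostname too):

mysql -u root wordpress -e "UPDATE wp_options SET option_value = REPLACE(option_value, 'www.opticality.com', 'localhost') WHERE option_name IN ('siteurl', 'home');"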

In the case of nginx, I couldn’t do that, since I was testing on my live production server. In retrospect, this was stupid (and cost me hours of extra time), since I have a test server I could have installed it on, run it on port 80, changed the blog name, etc., and been sure that I had everything right before copying over the config. Of course, I learned a ton more by keeping my feet to the fire, especially when my public site was hosed (which it was a number of times this weekend!), perhaps shortening the overall learning experience out of necessity! 🙁

That made testing WP extremely painful. Instead, I tested some standalone php scripts, and some other php-based systems that I have installed on the server, but don’t typically activate. All worked perfectly (including my ability to watch xcache return hits and misses from the op-code cache), so I was growing in my confidence.

Sometime late Wednesday night I was mentally prepared to cut over (since switching back to Apache would only take a few seconds anyway), but I decided to wait until the weekend, when traffic to this blog is nearly non-existent (come on folks, you can do something about that!). 😉

Saturday morning, very early, I stopped Apache, reloaded nginx with a configuration that pointed to port 80, and held my breath. I was amazed that everything seemed to be working perfectly, on the first shot! I moved on to other things.

Later in the day, I visited the admin interface for WP (Firefox is my default browser, though I obviously have IE installed as well, and for yucks, I also have Safari, which I very rarely launch). Oh oh, I had zero styling. Clearly, something was causing the CSS not to be applied! I quickly ran back to the retail interface and thankfully, that was still being served up correctly. That meant that I could take my time to fix this problem, as it only affected me!

I turned on debugging in nginx and saw that the CSS was indeed being sent to the browser. That’s not good either, as it seemed that this wasn’t going to be a simple one-line fix in nginx.

Yesterday morning, after nginx had been in production for 24 hours, I decided to check out IE for the admin pages. They showed correctly! Firefox was still showing zero styling. I compared the source downloads, and Firefox was larger, but had everything that IE had as well (including the CSS). At first I incorrectly assumed that perhaps it was the extra ie.css that IE was applying, and Firefox was ignoring, but that really couldn’t have been it, and I didn’t waste any time on that.

Now that I had a data point, I started Googling. It took me quite a while to find a relevant hit (which still surprises me!), and I had to get pretty creative in my search terms. Finally, I found one solitary forum hit, for a php-based product called Etomite CMS (I had never heard of it before), describing exactly what was happening to me. Here’s the link to the forum post.

OK, their take was that the CSS was coming down with a MIME type of text/html, rather than the correct text/css. I went back to Firefox, and did what I should have done at the beginning (my only excuse is that I’ve never had this type of use case to debug before!). I enabled Firebug on the admin page (I already had Firebug installed, but I’ve rarely ever needed it).

Immediately, I could see that the CSS had been downloaded (I knew it was sent from the nginx logs, but now I knew it was received and processed). I could also see instantly that the CSS was not recognized as CSS. Going to the Net tab in Firebug revealed the headers for each individual request, and indeed, all of the CSS came back with text/html as the type. So, IE ignores that, and applies the CSS anyway, because it sees the extension (I’m guessing), but Firefox doesn’t. Fair enough.
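
The same smoking gun shows up from the command line with a quick header check (the CSS path here is just an illustrative guess):

curl -I http://www.opticality.com/wp-admin/css/global.css
# The Content-Type header came back as text/html, when it should be text/css.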

But, why is it coming back text/html? I checked the nginx MIME types file, and CSS was coded correctly (I didn’t touch that file anyway). Looking at the headers again, I spotted the problem. All of the CSS files were being returned by PHP, not statically by nginx. In other words, my nginx config was wrong (even though it was otherwise working), as it was passing requests for CSS files to the FastCGI PHP process, and getting back an incorrect MIME type for those files!

A very cursory search didn’t reveal how I would tell PHP to return a correct MIME type. I’ll save that investigation for another day, as it would have been an immediate, but suboptimal solution anyway, since these files can be served faster by nginx to begin with. That meant figuring out what was wrong with my nginx configuration.

The docs for nginx are pretty good (with a few exceptions, but I’m not quibbling). There are also a ton of cookbook examples, some good and detailed, others that should be deleted until they are completed. I realized pretty quickly that even though my retail interface was working, I had less of an understanding of how the different stanzas in my configuration file were interacting (specifically, the location directives).

I really didn’t want to revert to Apache, so I decided to experiment on the live server. It was Sunday evening, and I figured traffic would be relatively light. Sparing you (only a drop, given how long this is already), I hosed the server multiple times. I could see that search bot requests were getting errors, and my tests were returning a variety of errors as well. At one point, I had nearly everything working, and then promptly screwed it up worse than before. 🙁

Eventually, under a lot of pressure (all of it self-imposed), I simplified my configuration file, and it all started working, retail and admin alike. Whew. I’m now serving all static files directly from the file system, and all php files through the FastCGI process. The world is safe for blog readers again. 🙂
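
The simplified configuration has roughly the following shape (a sketch, not my literal file; the document root is an assumption, and php-fpm is listening on its default port 9000):

server {
    listen 80;
    server_name www.opticality.com;
    root /var/www/blog;    # hypothetical document root

    # Static files: served directly off the file system by nginx.
    location / {
        # Anything that isn't a real file gets handed to WordPress.
        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php last;
        }
    }

    # PHP: passed through to the php-fpm FastCGI process.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}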

Like I said above, the problem was never with nginx, though I can’t say the same thing for PHP returning text/html for a CSS file. Clearly, PHP shouldn’t have to process those files, but if it does, it should do the right thing in setting the MIME type. Of course, if it had, I would likely have never noticed that my static files weren’t being served as I intended them!

One real complaint, aimed at nginx. Since I use the proxy_pass directive to serve up all Zope content (and like I said, that worked from the first shot, and has worked ever since!), I was quite frustrated by the limitations on where it can be used, and how. Specifically, it can’t be used within a regular expression based if test, even if you aren’t using variable substitution in the proxy_pass itself. That simply doesn’t make sense to me. And, now that I mentioned it, there are other rules about when/how you can use variable substitution as well.

Clearly, I got it working, so the above isn’t a giant complaint, it just eludes my current understanding. As for getting the rest to work, I have a better understanding than I did, but I’m sure that if I were to muck around a little more, which I’ll inevitably do, I’ll break it again, showing that my understanding is still at the beginner level.

In any case, for the moment, it’s doing what I intended, and is in production. 🙂

Aside from Jamie’s pointer, in doing the initial research, what got me excited was reading that WordPress.com had switched to nginx for their load balancing (and might eventually switch for their web serving as well), and that Fastmail is using nginx for IMAP/POP mail proxying. Clearly, nginx is ready for prime time (my puny blog notwithstanding). 🙂

How Software Is Built Interview

A few weeks back, I was interviewed by two guys, Scott Swigart and Sean Campbell, who are doing a very interesting survey called How Software Is Built. The transcript of our session just went up last night. It’s long (as most of these are), but if you’re up for it, here’s the link.

Rube Goldberg and SSH

Rube Goldberg would be very proud of what you can accomplish with SSH. If you don’t know what SSH is, you really should just stop reading, as not only will this post be meaningless to you, you wouldn’t care about the result (or technique) even if you followed it perfectly! 😉

A while ago, I gave an ancient laptop to a good friend who was sharing her husband’s laptop while they lived in Princeton for a year (he just finished a fellowship there and they are returning home in a week). Given the age of the laptop, the amount of RAM, the size of the hard drive, etc., I was hoping my friend would be willing to live with Linux rather than Windows on the box.

She’s a Gmail user, so she was indifferent (obviously, having never tried Linux before), given that she basically lives in the browser most of the time she’s on the machine. I installed PCLinuxOS on it (the 2007 release) because it looks a bit more like Windows than some other popular Linux distros. Today, I would make a different choice, not that there’s anything wrong with PCLinuxOS.

Anyway, for the most part, it has worked out very well for her, and she’s felt connected to the world, especially when her husband was in the office, and there was no laptop at home. Unfortunately, there is a bug somewhere either in PCLinuxOS, in her hardware, in their Linksys router, or in the timing between all of them, because on occasion, when the laptop boots, it doesn’t get the correct DNS server info even when it gets a good IP address via DHCP.

In what has to be a very painful process for my non-techie friend, I have given her a series of a few commands to type into a terminal window to correct the problem, when it occurs. Of course, it’s entirely Greek to her (rightfully so), but it works, and she patiently types them away and it all magically starts working again.
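
The gist of those commands was something like this (a hypothetical reconstruction; 192.168.1.1 stands in for the Linksys router’s address):

su -
# Point the resolver at the router, which forwards DNS for the LAN.
echo "nameserver 192.168.1.1" > /etc/resolv.conf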

On occasion, she’s had trouble getting all of the commands in correctly, and she feels guilty calling me, even though I keep telling her that I don’t mind helping whatsoever. That got me to thinking about how I could fix it permanently, without making her life even more miserable trying to talk her through the various things I’d like to try in order to be sure I found the right solution.

I don’t have time to visit Princeton this week, and soon, she’ll be back in Indiana, and I definitely won’t be visiting there for a while. So, I need remote access to her machine. I can’t just SSH into it, because I certainly don’t want to talk her through port forwarding on her Linksys router, nor do I want to leave that port permanently open. That cuts out a vanilla VNC connection too (which would be overkill, but if it were available, would also work).

So, I thought that perhaps I would try one of the web-based remote control services. I have had excellent success with the free service from Yuuguu.com when I help my Dad with his Windows machine. It works on PCs and Macs, but apparently not yet on Linux, even though it’s Java based. That was disappointing. A peek at a few others yielded similar results.

After scratching my head a bit, and searching the net a bit more, I came across a very simple solution, entirely via SSH, but with Rube Goldberg implications in that I was solving a very simple problem, with a built-in option of SSH, but jumping through tons of hoops to get to the point where the simple command could be issued.

The solution (tested by me, but not yet done with my friend, because I wanted to be sure before subjecting her!) is as follows:

I’m running Windows XP. I could run an SSH daemon there (in a number of ways), but since this is a temporary solution that I don’t really want to think about, I instead fire up VMware Player and launch my new favorite mini-distro, CDLinux 0.6.1. It automatically fires up an SSH server.

I then poke a hole in my firewall (I didn’t need to talk myself through it either) 😉 with an arbitrary port number (for argument’s sake, let’s say it’s 12345). I forward that to port 22 on my CDLinux instance (running under VMware, and therefore having a different NAT’ed IP from my Windows box!). I can even leave the firewall in that state permanently if I want, since 99.9% of the time, CDLinux won’t even be running, and even if it was, and someone luckily got in, it’s a Live CD image, with nothing to really harm!

OK, we’re almost done! On the remote machine (my friend’s), she would type the following, in a terminal window:

ssh -l cdl -p 12345 -R 54321:localhost:22 the.name.of.my.remote.machine

She’ll get redirected to my VMware instance, and be prompted for a password, which I’ll give her in advance (I can’t use ssh keys for this, since I don’t want to over-complicate things). Once she is in, I open a terminal window in my instance, and type:

ssh -l her_user_name -p 54321 localhost

Voila! I’ll now have a shell on her laptop, through an SSH tunnel, without her poking any holes in her firewall, and without me even needing to know her IP address, unless I want to restrict my SSH port forward to her specific machine, which would make this dance even more secure.

I’ve tried this a few times from different machines, all with success, but my friend isn’t online at the moment, so the final test will likely have to wait until tomorrow morning. In any event, a cool (and relatively simple solution) to an otherwise thorny problem. Just as a footnote, if I needed more control over her machine, the exact technique could be used to reverse tunnel a VNC port, giving me graphical control, or I could SSH back with -X (for X-Windows tunneling), and launch graphical clients one at a time, etc.
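
For completeness, the VNC variant would look something like this (an untested sketch; 5900 is the standard port for VNC display 0):

# On her machine: carry a VNC port back through the same tunnel.
ssh -l cdl -p 12345 -R 54321:localhost:22 -R 55900:localhost:5900 the.name.of.my.remote.machine

# On my end: point a VNC viewer at localhost:55900, or go graphical over X:
ssh -X -l her_user_name -p 54321 localhost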

Update: OK, so today we got to try, and it worked perfectly. The only kink was that sshd was not automatically started on her laptop, so I had to talk her through becoming root and starting the service (simple enough!).

After we got it going, I did the unthinkable, and offered to upgrade her system. It was reasonably old, with Firefox at 2.0.0.7 (for example), and lots of other packages that could probably stand a security update. I warned her that it’s often better to leave these things alone when they are running smoothly, but in the end, we both decided to go for it.

So, I ran an apt-get upgrade. I then asked her to reboot. The machine came up, but only in terminal mode. She was able to log on, but startx failed with tons of errors. Oh oh…

Thankfully, the network did come up, and she was able to log on and run the ssh tunnel. I was then able to get back on her machine. I decided that instead of poking around too much, I’d try one last thing, which was to perform an apt-get dist-upgrade. This ran for a bit, and then I asked her to reboot again.

Voila! The machine came up correctly, and the networking worked again. So, for the moment, it seems that we accomplished everything we set out to do today, including her running Firefox 2.0.0.14 (I know, not 3.0 as yet…). Whew! 🙂

May 2008 Poker

Same old, same old, with some new stuff thrown in, to throw me/you off. 😉

Right after I reported on April’s results, I transferred some money from my one poker site to another site. The two sites have an agreement between them (perhaps there is joint ownership, but I have no idea) whereby you can transfer money between them at no cost, an unlimited number of times.

I decided to do that because my beloved Omaha Hi-Lo tourneys have virtually disappeared from my primary site, and even the Hold’Em tourneys have become more of a circus side-show (IMHO). I actually prefer the software from my primary site, but I do enjoy playing Omaha Hi-Lo on the new site, so the blend is working for the moment.

That’s the new part. Now for the old…

Nothing has changed. I still played an extremely light schedule in May, so I often go days without launching the poker software. I also play mostly in actual tournaments rather than qualifiers (there are some exceptions, notably on weekends), so I’m spending more on the entry fees (this has been consistent for a couple of months now), which makes the volatility higher.

I started the month off with a bang, entering the Sunday big one by ponying up the full $215. I finished 54th and got back $600. In fact, I got rivered on that last hand (no, I don’t cry about losses) or I might have gone much further. At the other extreme, last weekend, I paid full freight again, and busted out on the first hand! That was a first for me. I flopped a set when my opponent flopped a straight. He played in a raised pot on the first hand with 79c, so he earned that pot by playing loosely.

I had a small roller coaster on the new site, starting off poorly, then cashing a few times to go ahead for the month, finally giving it all back and more. In the future, I won’t break out the results between the sites, but this one time, I will.

New site = -$164.00 for the month, old site = -$68.20 for the month. Grand total = -$232.20. Disappointing, but there were some real opportunities for big wins, so I’m really not complaining.

Yesterday I won an entry into today’s big one, so the loss counts in May, but I get to start off June with a free entry to the big one, so we’ll see what I make of the opportunity later today.

Amazon Unbox

I haven’t had an interest in any of the movie download services (until yesterday). First, we don’t watch that many movies. Second, we have so many DVDs that we own, and probably will never watch. Third, since we only have laptops, disk space can become an issue if the download is purchased and is meant to persist forever.

There are a few TV shows that we really like. A number more that we watch regularly but don’t care about as much. One of our favorites is NCIS (Naval Criminal Investigative Service). We liked it from the very first episode, but not without reservations.

The stories are compelling and extremely well written. The twists and turns are clever without being absurd. On the other hand, for too long (at least 3+ seasons) the banter within the unit (specifically, one of the male characters with any female) was so juvenile as to be completely unbelievable, especially in this type of unit in these types of situations.

It was so maddening that we often discussed dropping NCIS from our regular habits (as we’ve done with CSI, CSI: NY, Cold Case and many others, after watching them for years). We didn’t stop, because the stories themselves were probably the best on TV, week in and week out, with very few exceptions!

Thankfully, at least a season ago, they toned down some of the idiocy, without losing each character’s individuality. It could still be less, and be a better show, but it rarely grates on me too badly.

So, being a must watch show for us, I record it on three separate DVRs each week. It records in HD on my Verizon FiOS DVR, and in standard definition on my DirecTivo and in the apartment (where it could also be HD, but I preserve disk space on that DVR more carefully).

We’ve been away for a few weeks, working and enjoying our godchildren’s graduations. When we got to the house, I saw that the FiOS DVR was 99% full. That’s because it’s the only one that I allow HD recording on. It had three episodes of NCIS on it (4/29, 5/6 and 5/13). It also had three or four episodes of House M.D. in HD. All of those episodes were duplicated on the DirecTivo.

I knew that the stuff that was scheduled to record that night (this past Monday) would wipe out unprotected older stuff, so I chose to proactively delete shows to make room. I couldn’t decide between NCIS and House. In the end, I decided to delete NCIS because some of the scenes in House can be all the more disgusting in HD. 😉

We then ended up coming to the apartment a few days earlier than expected. When we got here, I was reminded that on occasion, the DVR here (supplied by Time Warner Cable) locks up, and even though there is plenty of disk space, nothing gets recorded until I reboot it. That happened the week of 4/21 and I didn’t get to reboot it until 4/30, which meant that I missed NCIS on 4/29 on the apartment DVR.

No biggie (or so I thought) since I have it up at the house on the DirecTivo. But, it also meant that we wouldn’t watch the remaining NCIS episodes that we have here until we got back to the house. Being the clever guy that I am, I connected my laptop to the DirecTivo via my Sling Box. I also connected an S-Video cable from my laptop to the TV.

I fired up the DirecTivo, found the correct episode of NCIS and hit play. A second later, it prompted me to save or delete the episode. Huh? After doing that a second time, I went into the episode information screen, and saw that the duration was 0:00. Ugh, for whatever reason, it failed to record.

What to do? Well, cleverly, I went to cbs.com to see if they offered up streaming video of the episode. Indeed, NCIS is one of the shows that they offer full episode streaming for (not all, and I have no idea why!). Unfortunately, they only offered the last three episodes, all of which I have on two DVRs.

I can understand (somewhat) why they don’t offer all episodes for streaming, forever. That said, it seems silly to cut it off at three, and to make the current ones available, which supposedly have more of a premium value to them. Then again, I don’t make these decisions for anyone, including CBS.

Searching the net, I came up with NCIS episodes being available for sale on Amazon Unbox. Like I said in the introduction, I’ve never had an interest in this, or any like service. That said, I’ve been delighted with Amazon’s MP3 Download service, so I at least trusted this brand and believed that the experience wouldn’t annoy me.

We decided to spring for the $1.99 to fill in our missing episode. The application downloaded and installed quickly. The 856MB episode file took a little longer to download (roughly 30-40 minutes). That part would have been a lot quicker if we were at the house, with our FiOS service. 🙂

I still had the video cable in my laptop, connected to the TV. So, once the episode downloaded I was able to fire up their player and watch it on the TV instantly.

The quality was quite good. We thoroughly enjoyed the episode, In The Zone, and were glad to spend the $1.99 to not have a gap in our collective memories of this show.

While each episode easily stands alone, even if it makes reference to past events, character development is always a nice touch. This episode focused on a character who rarely gets much air time, Nikki Jardine. If she ever plays a more prominent role in the future, this would have been a really bad episode to miss. Of course, she might never be on again, so who knows. 😉

We’re back to normal now, and can catch up with the rest of the shows on the normal DVR. We also watched, through the Sling Box, the episode of House that wasn’t recorded in the city (due to the same reboot problem), so we can now watch the rest of those as well.

A very long post, just to tell you that Amazon Unbox works well, as advertised. While I don’t anticipate using it often, it’s very nice to know that it’s there for any future emergencies, or even desires. 🙂