Saturday, May 16, 2009

Some Interesting Wolfram Alpha Queries.

As I approached Wolfram Alpha last night I was full of excitement. Will it change the way I use the internet? Well, while Alpha is interesting, I was disappointed by its inability to figure out what it was that I actually wanted. However, the power is there; given some time to mature, I feel it will be a very powerful tool.

Searching around, I wasn't able to find any lists of interesting queries. The closest I've been able to find is this list of Easter Eggs. So, in order to better understand Alpha, I've been hunting for others' queries. Here are some I've found on Twitter:

@thinkhard : Cyclic Cellular Automata, 3-Color
I'm still not sure exactly what "rule 4594122302107" means or how it relates to the R/T/C/N values of the algorithm.
@marios : Interesting data on Cyprus 
I had previously been ignorant of just how civilized Cyprus is.
@maxtsai : Swine Flu Statistics 
Interesting to see that Alpha is being continually updated.
@oliverg : Caffeine and Aspirin
“Has an LD50 for caffeine but none for Aspirin”
@cah28 : The US Use of Oil
I’m still trying to figure out how to get a plot of this from 1908 to 2008.

If you find other interesting examples of Wolfram Alpha queries, please don't hesitate to share in the comments section.

Sunday, March 22, 2009

Do Epic Shit

I love this post and its follow-up, Think Big, Act Small.

Doing Epic Shit is what we all want to do, right? If only it weren't so much work. So, with that in mind, I would only add one thing: Do Epic Shit People Care About. I mean, if you're going to put in the work to Do Epic Shit, you'd probably be better off Making Epic Some Shit that people actually care about.

Also, let me suggest a companion phrase: Give Other People's Epic Shit Props Too. No one likes a self-obsessed jerk.

Friday, March 13, 2009

FreeBSD 7 and the SiI 3114 SATA150 controller

Well, it turns out that the issue with my ZFS box was only that my SiI 3114 SATA card had outdated firmware. FreeBSD 7 with ZFS has been running solid for a few days now while under moderate load from Samba.

I discovered the solution after running into the exact same problems with my setup under a Debian software RAID-6 configuration. After a bit of googling for "SiI 3114 debian timeout" I found my answer. A firmware update and a FreeBSD reinstall later, everything was peachy.

I'll pick up where I left off with the media server part of the project soon.

Saturday, March 7, 2009

I'm sorry FreeBSD...I'm moving out and I'm taking the Raid with me

Well, at least I managed to figure out the issue with FreeBSD not booting. Unfortunately, I can't get it to stabilize with ZFS, and it looks to be the fault of the ATA driver. It's sad, but it seems like the community has lost much of its luster over the last few years. My questions in the freenode IRC channel fell on deaf ears, and it appears I'm not alone: even in the newsgroups, many well-documented questions go unanswered unless they are very simple. Have all the helpful gurus moved on to other operating systems?

And so I've moved to Debian with md RAID-6. It's not quite everything I'm dreaming of, but at least it works. Hopefully, in a couple of years when I build my next home RAID, ZFS will be mature on a platform that works on my old hardware. My home server is always my previous desktop.

Update: the problem is fixed and ZFS is running without issue.

Saturday, February 28, 2009

On Philosophy’s Great Experiment

...People asking for change for a dollar got a much better response outside a pleasant-smelling bakery than a neutral-smelling hardware store; unwitting subjects in an experiment who found a dime in a phone booth were far more ready to help someone pick up dropped papers than those who hadn’t had that tiny piece of good luck...

“Would you rather have people be helpful or not? It turns out that having little nice things happen to them is a much better way of making them helpful than spending a huge amount of energy on improving their characters.”

From: http://www.prospect-magazine.co.uk/article_details.php?id=10638

Is it amazing that context matters so much? I don't think it is. Human beings have often been described as the culmination of their memories, and it seems obvious that memories are weighted by a combination of recency and emotional impact. The paradigm shift here seems to me to be more about thinking in the broader context of situation instead of just memories. The term situation should include genetics, conditioning, memories both short and long term, and current levels of neurotransmitters, all of which culminate in the current mental state of the individual.

Saturday, February 21, 2009

Lessons Learned in One Year of Technical Blogging

My first post on my Atalasoft blog went up almost exactly one year ago (Feb 20th, 2008), and since then I've written more than a post a week. It's been an interesting year in that, while I've had many blogs in the past, they never got the kind of attention that this one has. I feel a big part of this relative success is that I've actively tried to supply interesting content for a particular audience, instead of just writing about whichever projects or ideas were currently floating around in my head (that's what this blog is for). In this post I'm going to go over some of my successes and failures in an attempt to discover what has worked and what hasn't.


Top 5 Features Your Cellphone Should Already Have (February 20th, 2008)

This was my first attempt to write about something that I thought people would care about. I used the "Top X Things" blogging pattern and tried to include a catchy title. By almost any measure, the post was an overall failure. It sank like a brick on all of the aggregator websites I submitted it to and, in the end, no one outside my company read it.

Why didn't they care? In retrospect I would say three reasons:

  1. It was completely off topic. I work for a software company. No one had any reason to care about my opinions on cell phone features.
  2. It followed a pattern too closely. Patterns are a good starting point, but without your own style you can't stand out from the background noise.
  3. I didn't link to anyone else. This was a lesson I’m glad I learned early on. No man is an island, especially in the blogosphere.

At the same time, in this post were the roots of later success:

  1. I started blogging and found that I liked it for its own sake. The first step is always the hardest.
  2. I applied patterns that I inferred from other successful posts on the web.
  3. I started to develop my own style.


    Clojure Impressions (March 21st, 2008)

    I owe much of my passion in blogging to Rich Hickey and his Clojure presentation in Northampton, MA. This talk really brought home the power of functional programming, which I had previously considered to be for academic use only. I was also shocked to suddenly realize there was a whole other aspect to programming that I knew very little about: the properties of a language that make it more applicable to solving specific types of problems. Eventually this led me to spend much more time reading about both functional and object-oriented programming.

    There was also much success to be found in the content of the post itself: it was my first to reach over 1,000 page views, and it drew my first comment from someone other than a coworker. Both of these things are, of course, due to my linking to, and being linked to from, other bloggers who went to the same event.

    How did this affect my style of blogging?

    1. I attained first-order ignorance, 1OI (I knew that I didn't know), on the topic of the quantifiable properties of programming languages.
    2. I became much more interested in functional programming, which in turn led me to Microsoft's new language: F#.
    3. I learned that success in blogging is heavily dependent on reciprocal linking.


    .NET Memory Management Series: Part 1, Part 2, Part 3, Part 4, and Part 5 (April 3rd – August 20th, 2008)

    This series significantly changed how I saw the utility of blogging. Some of my previous tutorial posts had seen moderate success (my post on using NGen was my first with over 2,000 visitors), but this entire series was broadly viewed, some posts with over 5,000 visits. This was also the first time I was linked to by a news source outside of the blogosphere, Channel 9. In this (still very moderate) success, I realized that posting well-researched and useful tutorials about often misunderstood aspects of .NET (or any complex technology) could build a significant amount of recurring readership.

    After this, I wrote several more tutorial-style posts. I found that, while they were all well liked and received a significant number of hits, they took a great deal of time to research. I was always very careful to be sure I was posting correct information (or at least to call out the unknowns), but often this research would take days. I had similarly big expectations for my newer Process in .NET series, but it is all on hold for now. Unfortunately, now that the economy has crashed hard and I am feeling more pressure at work, I no longer find myself with time for these kinds of in-depth, well-researched posts.

    What did I learn from this?

    1. Readers appreciate well researched information on useful but not well documented topics.
    2. Relating blogs by putting them in a series can build readership momentum.
    3. Well researched information takes large amounts of time to generate. However, the effort can be worth it in that it builds expertise.


    Why Are Our Programs Still Represented By Flat Files? (June 6th, 2008)

    With well over 11,000 visitors, this post has been by far my most successful. I was particularly proud that it, and the community discussion it caused, was picked up afterward in an InfoQ news article by Sadek Drobi (a site I respect greatly for its strict professionalism). This was the first time one of my posts was taken seriously by programming-language intellectuals, and it gave me confidence in future writings on the topic.

    The topic of this post was largely inspired by playing with NDepend and its visualizations of .NET classes, but it also owed much to the reading I had been doing after Rich Hickey's Clojure presentation. I have since written several other posts in the same vein, the most successful of which was Why OO may not suck, or, Take a ride on the Falsus Omnibus. With just over 7,000 hits, the majority of which came from programming reddit, it has been my most successful post in recent history.

    I learned quite a bit from these posts:

    1. It can be good to take a strong stance in order to facilitate discussion, even if you don't feel quite that strongly about the topic.
    2. Don't be afraid to challenge difficult ideas or big names as everyone else out there is just a person like you. You can get a lot of credit for pointing out the elephant in the room.
    3. The topics that are the most fun to write about are usually also the most controversial. Don't hesitate to go for it if you feel you have something interesting to say.


    Additional Lessons Learned Outside of Specific Posts

    1. RSS readership can drop very fast if you stop posting or change topics. I've seen two examples of this: the first was my post-August slump, in which I only posted a couple of times in one month; the second was my move from mainly .NET tutorials to language-theory posts. I still try to write an occasional tutorial post, but I feel like whenever I post an opinion piece all of the tutorial subscribers immediately ditch my feed.
    2. Following from this, readers can largely be grouped into (somewhat overlapping) classes based on the type of material they are interested in reading. If you do not provide enough material your readers are interested in, they will look elsewhere.
    3. Language can present both a barrier to entry for some and a red carpet for others. Using difficult vocabulary or jargon will turn off people who don’t understand enough of it. However, most intellectuals feel at home with this kind of language and so won’t take you seriously if you don’t use it.
    4. Blogging is fun for its own sake, although having a large audience makes it more exciting.


    Conclusion

    In retrospect, much of what I've learned seems obvious now. However, I started recently enough that I can still remember feeling like I was jumping into a big unknown; all I knew at the time was what I could infer from reading other successful bloggers' posts. That is not to say I've arrived. I've learned much from both my successes and my failures in the past year, but I know I still have a long way to go in terms of skill and general knowledge.

    Wednesday, February 18, 2009

    Streaming Video from FreeBSD to XBox 360... or Not

    After much searching, it seems as though the options for serving up video from my FreeBSD server to my XBox 360 are quite limited. With the help of Google and a year-old article on this very same topic, it seems there are five options:

    Name           | In Ports? | Supports 360? | Transcoding? | Free? | Last Release
    GMediaServer   | No        | With Patch    | No           | Yes   | 11/10/07
    MediaTomb      | Yes       | No            | Yes          | Yes   | 03/01/08
    360MediaServer | No        | Yes           | Yes          | Yes   | 08/29/07
    uShare         | Yes       | Yes           | No           | Yes   | 12/09/07
    Fuppes         | No        | Yes           | Yes          | Yes   | 12/19/07

    Ahh, if only MediaTomb worked with the 360, life would be so much simpler.

    Given the options, it looks like my only real choice here is Fuppes, though that means I'm going to need to manually compile and configure it. Good thing Fuppes comes with complete instructions on how to do this, because compiling things on FreeBSD can sometimes get messy.
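
    For anyone curious, building it is more or less the usual autotools routine. A rough sketch (assuming the release tarball has already been downloaded; FreeBSD's default make is BSD make, so use gmake):

    tar -xzf fuppes-*.tar.gz
    cd fuppes-*
    ./configure --prefix=/usr/local   # pkg_add -r whatever libraries configure complains about
    gmake
    gmake install                     # as root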

    Edit: Right after I got to this point ZFS vomited everywhere and now FreeBSD refuses to get past the boot0 stage unless my onboard SATA controllers are both disabled. If you are interested, read more about it on the luck.freebsd.questions mailing list.

    Update: I've since fixed this issue. FreeBSD 7 with ZFS is working great.

    ZFS with LZJB Compression

    I've still been working on tweaking my FreeBSD-based, 6-disk ZFS setup. It's mainly going to be used for file storage, so it had crossed my mind that it might be best to use some type of filesystem-level compression. Initially, as in all of the tutorials, I was going to use gzip. However, with a small amount of googling I discovered that LZJB compression is a far better fit in most cases. In fact, because its CPU and memory utilization is so low, LZJB outperforms an uncompressed filesystem under many circumstances, mainly because fewer bytes need to be read from or written to the disk. Also, with 2x-3x compression, I can easily bump up my redundancy from two to three copies of important files at almost no cost.

    The real question is: is my compressed data more susceptible to corruption? Of course, the answer is yes. Each bit holds more information, so flipping it causes more information to be lost. It seems to me, though, that the extra copies more than offset this. I just need to remember to schedule a regular zpool scrub in crontab.
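
    The commands involved would be along these lines (the pool is named "pool", as in my setup post; the "important" dataset name is just an example):

    # enable lightweight LZJB compression pool-wide
    zfs set compression=lzjb pool
    # keep three copies of the blocks for the stuff I really care about
    zfs set copies=3 pool/important
    # weekly scrub from root's crontab to catch silent corruption early
    0 3 * * 0 /sbin/zpool scrub pool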

    Truthfully, using compression in a filesystem leaves me a bit uneasy under any circumstances. I lost more than one installation to Stacker and DiskDoubler corruption back in the day. It may not have been a lot of bytes that I lost, but those BBS lists and Space Quest save files were worth a lot more to me then than any image or music file is to me now.

    Sunday, February 15, 2009

    Evernote API

    A while back I was complaining about some of the shortcomings I felt Evernote could easily fix. Well, it turns out I was wrong about some things. While they don't make it obvious anywhere inside the Evernote client app, its entire data interface is exposed.

    I'm quite tempted to try and roll my own memory resident capture app based on the following:
    • It's easy to find out which files a process has locked
    • In many cases, it's possible to find out which file contains the data being noted and where it comes from.
    • I want a more intuitive way to select a target notebook at capture time.
    • I want more metadata about my links in Evernote.

    If I had the time, I'd really like to make a note reading/taking app as well. Some ideas:
    • Better note organization
    • Private-Public encrypted note sections with a key store and auto decrypt.
    • Integration with my Outlook GTD system.
    Unfortunately, I'm concerned that the drawing stuff would be very time consuming to reimplement. The API is really nice, guys, but what's stopping you from making a client plugin system?

    Wednesday, February 11, 2009

    A Personal GTD and Ubiquitous Capture Review


    I thought it would be a good idea to do a public reevaluation of my current GTD/Ubiquitous Capture system and maybe get some feedback. I would love to hear any constructive suggestions on how I might improve it further or comments about how you take care of these issues differently using your own system.


    Categorization and Evaluation Systems

    Types of Information:
    • Already in a digital system.
    • Waiting to be put into a digital system.
    Categories of Information to Manage:
    • Events (Business/Personal) and (Importance)
    • Tasks (Business/Personal) and (Category)
    • Ideas
    • Contacts (Business/Personal) and (Links)
    • Communications (Phone/IM/Email)
    • General Data (Documents/Pictures/Videos)
    Ranking System:
    • Perfect - Always works
    • Great - Mostly works without issue
    • Good - Works with occasional issue
    • Poor - Works with constant, frustrating issue
    • Useless - Does not work

    Physical Systems in Play

    VX6700 Windows Mobile Phone:


    Capture/GTD Software:
    • Default Mobile Office
    Information Management Ratings By Type:
    • Events - Great - If it gets into my phone, I'll be there.
    • Tasks - Poor - Interface is much too difficult to use.
    • Ideas - Poor - Much too slow. Input style is very limiting. Interface is crappy.
    • Contacts - Good - Has difficulty dealing with quantity. Metadata is difficult to access.
    • Communications - Good - Makes phone calls fairly well, Texts are doable. Email is painful to use.
    • General Data - Useless - Not enough storage space. Document viewing is terrible.
    Conveniences:
    • Automatic Sync With Work and Home PC
    • Extreme Portability
    Limiting Factors:
    • Sluggish Input
    • Tiny, Low Res Screen
    • Very Limited Storage
    • Useless Camera
    • Limited to 2 Sync Targets
    • Only Friendly with Office
    • If the battery fails, I have no phone.

    Eee PC 1000 (Windows):


    Capture/GTD Software:
    • Evernote
    • Google Apps
    Information Management Ratings By Type:
    • Events - Useless - Most often in standby
    • Tasks - Good - Interface is usable but slow to access due to small keys and trackpad.
    • Ideas - Good - Evernote is the best idea capture I've found yet. It still sucks.
    • Contacts - Poor - Does not sync well with my Windows Mobile Phone (not enough sync targets).
    • Communications - Great - With Wifi, everything on the net. Even phone with Skype.
    • General Data - Poor - Often in standby. If I forget to sync, I don't have my stuff.
    Conveniences:
    • Long Battery Life (10 Hours+)
    • Heavy Web Integration, Evernote and Gmail Syncing
    Limiting Factors:
    • Cannot connect to Windows Mobile Phone (Not Enough Sync Targets)

    Dual-Monitor Desktop (Office):

    Capture/GTD:
    • Outlook 2007
    • Jello Dashboard
    • OneNote 2007
    • Pidgin
    • SharePoint (MOSS)
    Information Management Ratings By Type:
    • Events - Poor - Always seem to pop up in batches. Frequently interrupts me while doing work.
    • Tasks - Great - Smooth task evaluation and management. However, Jello Dashboard is really ugly.
    • Ideas - Good - While OneNote is really flexible, it also takes a lot of time to figure out how to do anything.
    • Contacts - Good - Even with Xobni, it's difficult to keep people and data strongly associated. For example, if you forget a name, good luck finding that person. It's also difficult to distinguish my personal contacts from business ones.
    • Communications - Poor - Pidgin looks nice but frequently crashes.
    • General Data - Poor - Because SharePoint does not like files that aren't documents, I'm often forced to keep different parts of projects inside SharePoint while others are kept in folders. The biggest issues here are code and compiled programs.
    Conveniences:
    • Fast Task Management
    • Easy Collaboration (for some kinds of data)
    Limiting Factors:
    • Office is often very slow.
    • Office is bad at keeping data associations.
    • My contacts are a terrible mess.
    • My desktop is often cluttered with way too many programs, all of which are categorically part of some activity. It frustrates me to no end that I have no way to associate them and optimize my work.

    Notecards in Back Pocket:

    Information Management Ratings By Type:
    • Events - Poor - I did the Hipster PDA thing in college; it only worked because I obsessively checked it.
    • Tasks - Poor - Lists are hard to order on paper. Often tasks won't make the leap from here to Office/Google.
    • Ideas - Great - No limitations other than the size of the card. The only difficulty is digital capture/organizing.
    • Contacts - Poor - Just as with Tasks, it's difficult to sort on paper.
    • Communications - Useless - They are even hard to throw at people.
    • General Data - Useless - I once had this software that would let you print binary data as dots and then read it with a scanner. It was pretty much just for fun though.
    Conveniences:
    • It's easy to convey ideas with the freedom of paper.
    Limiting Factors:
    • Paper makes for poor integration.

    Conclusions

    Software Packages Most Frequently Used:
    • Microsoft Office 2007, Mobile and MOSS - Office Email/Contacts/Tasks/Events
    • Evernote - Ideas/Notes/Clippings
    • Google - Email/Calendar
    Best Covered Area: Ideas
    Worst Covered Area: Misc Data, Physical Documents


    Resolving Major Difficulties with My System

    1. Digitizing/Categorizing paper and notecards takes a lot of time and so I often avoid it.


    Why? I schedule notecards and papers to be processed together, once a week. Meanwhile, my filing cabinet hasn't been cleaned out or reordered in years and so is slow to use.

    Fix:
    First, make sure all Notecards I care about get digitized and then thrown out. I will put them in a separate, high priority, "Notecard Inbox". Second, I will rework my home filing cabinet, throw out old files, and reevaluate which types of paper to keep for extended periods as well as how to order said paper.


    2. Managing several tasks at once on my office computer can be frustrating.


    Why? My desktop is always cluttered with active programs.

    Fix: Try out a multi-desktop extension tool. I've seen a few of those go by on LifeHacker.


    3. Keeping track of which Projects/Events/Tasks/Contacts/Ideas/Data/Paper are related to which is all but impossible.


    Why? None of the tools I currently use seem to be designed to be used in that way.

    Fix:
    Explore the possibility of using a relational data management system or an Office plugin of some kind.


    4. Similarly, searching my Projects/Events/Tasks/Contacts/Ideas/Data/Paper cloud is impossible.


    Why? I currently have no software infrastructure in place to do this.

    Fix: Give Google Desktop indexing another try. Now, with a second hard disk and extra memory, it may have acceptable performance.


    5. I frequently don't get around to looking at things I tagged "later".

    Why? I have too many systems in place and too much information available.

    Fix:
    Explore the possibility of a system that can track text, videos, audio and paper for queued absorption. It must have a fast interface, be able to sync for offline use, and be usable everywhere.


    6. I often have trouble with GTD and Ubiquitous Capture on the go.

    Why?
    My Windows Mobile phone is three years old and just hanging on. It can't run any of the newer software and often suffers from input lag.

    Fix: Maybe it's time to get a new phone, although I was hoping to hold out for a Windows Mobile phone with both a decent amount of storage and a keyboard. I'm starting to feel that one will never come.

    Friday, February 6, 2009

    Getting Some Serious Analytics

    I decided tonight to wire up my personal and professional blogs with Google Analytics. Using it, I hope to learn more about which types of posts readers are most interested in so that I can write more along those lines.

    Actually, we already use Google Analytics for our work blogs at Atalasoft. Our whole site is wired up with it. However, the analytics information is only really used by our marketing department for measuring the evaluation funnel. While I could probably see the data if I wanted to, I doubt I'd be able to set up custom goals and such. This is why I added my own additional Analytics calls.

    The most difficult part was wiring things up to track miscellaneous links and buttons. I had to look up each button/link ID in Firebug and write some JavaScript to do the onclick assignments. This is the code I ended up using on my work blog:


    // Wire up Google Analytics goal tracking for the feed buttons and category links.
    function SetYourGoals()
    {
        // Bail out if ga.js hasn't loaded or the browser is ancient.
        if (typeof(_gat) != "object") { return false; }
        if (!document.getElementById) { return false; }

        var googlePageTracker = _gat._getTracker("??-???????-?");
        googlePageTracker._trackPageview();

        // Map of element ids (looked up in Firebug) to virtual goal pages.
        var myGoogleGoals = {};
        myGoogleGoals['bp___v___bt___s___rss'] = "/goal/feed";
        myGoogleGoals['bp___v___bt___s___atom'] = "/goal/feed";
        myGoogleGoals['bp___v___bt___s___email'] = "/goal/feed";
        myGoogleGoals['bp___v___bs___lcl___Categories_ctl02_Links_ctl01_Link'] = "/goal/personalmisc";
        myGoogleGoals['bp___v___bs___lcl___Categories_ctl02_Links_ctl02_Link'] = "/goal/personalmisc";
        myGoogleGoals['bp___v___bs___lcl___Categories_ctl02_Links_ctl03_Link'] = "/goal/personalblog";

        for (var id in myGoogleGoals)
        {
            var ahref = document.getElementById(id);
            if (!ahref) { continue; } // skip ids that aren't on this page

            // Use an immediately-invoked function so each handler captures its own
            // goal path; closing over the loop variable directly would make every
            // click record the last goal in the map.
            ahref.onclick = (function(goalPath)
            {
                return function()
                {
                    try
                    {
                        googlePageTracker._trackPageview(goalPath);
                    }
                    catch (err)
                    {
                        // ignore tracking failures
                    }
                    return false;
                };
            })(myGoogleGoals[id]);
        }
    }

    // Delay the goal wiring so the page (and ga.js) has a chance to finish loading.
    function AsyncSetYourGoals()
    {
        window.setTimeout(SetYourGoals, 5000);
        return false;
    }


    Anyone get the CIV reference?

    Thursday, February 5, 2009

    FreeBSD ZFS Running Strong, Rocking Hard

    I decided about a month ago to back up my current Ubuntu-based file server and put ZFS with raidz on it. Tonight, I finally got around to doing it. If you haven't heard about ZFS or Raidz, I suggest being ready to have your mind blown.

    I've been a FreeBSD user on and off since about 2.6. Initially, I only chose it because I wanted the help of all of my FreeBSD guru buddies on IRC (I really should drop in and say hi to those guys, it's been too long). It really grew on me though. I feel extremely comfortable with its kernel, filesystem layout, and ports system.

    That's why I was so happy to hear that it supported ZFS out of the box in 7.0. However, after a bit of research and reading about all the tuning necessary and how out of date the FreeBSD 7.1 branch of ZFS was, I thought I'd go to the source instead. I decided to try OpenSolaris.

    Unfortunately, OpenSolaris 2008.11 refused to boot. It would get to the copyright notice and just lock up solid. I messed around with it and tried some kernel flags but nothing seemed to help. In the end I went back to my old friend FreeBSD.

    The last version I used was 5.0, as a file server in my dorm room. One of the best things about FreeBSD is that it has a winning formula and it hasn't changed much in all of the years I've used it. I just picked it up and everything was right where I remembered it being. Microsoft could learn from this example of strong consistency (cough, Office 2007-Vista-Windows 7, cough).

    The only issue I ran into was a lack of support for my integrated NIC. I didn't feel too bad about that though. The server is running on an old ASRock 939Dual-SATA2 motherboard with a Realtek RTL8201CL onboard. It's pretty much trash anyway. So I went over to my box of PCI cards, grabbed an old Intel 82558 Pro NIC, and off I went.

    After that, it was a hop, skip, and a jump to getting ZFS going. I added the standard lines to my loader.conf to tune for conservative memory usage and sprinkled in some vfs.zfs.vdev.cache.size="5M" to compensate for my meager one gigabyte of RAM. Then I simply added zfs_enable="YES" to my rc.conf and off I went. One command and less than a second later, it was finished:

    # zpool create pool raidz2 ad4 ad6 ad8 ad10 ad12 ad16

    I had a full 2TB, 6 drive, double parity raidz array ready to go. Wow.
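
    For reference, the loader.conf and rc.conf additions looked roughly like this (the exact numbers are approximate, lifted from the usual low-memory ZFS tuning advice, so don't treat them as gospel):

    # /boot/loader.conf -- conservative memory settings for ZFS on 1GB of RAM
    vm.kmem_size="512M"
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="128M"
    vfs.zfs.vdev.cache.size="5M"

    # /etc/rc.conf
    zfs_enable="YES"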

    For an even bigger shock, take a walk down memory lane with me and check out what I had to go through to build my RAID-5 array in FreeBSD 5.0. What a pain in the ass that setup was. The funniest part is that this doesn't even take into account how software RAID in 5.0 would randomly crash, or how the RAID service had to be manually started from the command line after each boot.

    Wednesday, January 28, 2009

    Hey Merlin Mann, where's my digital Banker's Box?

    I've been a 43folders fan for quite a while. In college the Hipster PDA rescued me from procrastination and made me a lean, mean work machine. However, lately I've moved more into the GTD camp, as I have the power of Outlook available to me both at my desk and on my phone.

    It may then come as no surprise to you that I was excited to hear Merlin Mann was giving a talk at MacWorld. Earlier today I watched his talk "Toward Patterns for Creativity" on YouTube, and while I didn't find it to have much content I wasn't already familiar with, one thing stood out: The Banker's Box.

    The Banker's Box is a place where everything relating to a project (or context) goes. This way, while juggling several projects, it's always possible to find the materials you need. It's a great idea in the physical world, but I wish I had something that could meet that criterion in the virtual one. As it stands, I'm using several different applications, each holding a specific type of content.

    For example, I've been using Evernote as a way to collect notes for a while now, but as an all-encompassing "Banker's Box" solution, it's fallen short. The main issue seems to be that the ways I can manage content with Evernote are very limited. These limitations fall into a few categories and are shared by every single piece of software I use:

    The first is grabbing content. While Evernote is fantastic at grabbing a chunk of a webpage, it is significantly less good at doing the same inside a PDF file. It can hold any kind of text, so it's entirely possible to put code into it, but it lacks any syntax highlighting and so the code becomes difficult to read. Also, it integrates with almost nothing and is very proprietary. C'mon already, Evernote, throw us a bone here. If you actually got a plugin system going and some content-sharing standards in place, I'd be willing to give you money.

    My ideal content collection application would be able to reach into any app I was using, grab my selected content and associate with it the name of the app and the file/location the content was from. I also want to be able to click on something in my content manager and have it take me directly to that data if it's available. While you are at it, also cache the full source of the content. Disk space is super cheap.

    The second is management of content. Evernote has a terrible interface for managing a large number of notebooks. I'm only up to 8 and it's already gotten difficult to deal with. It also does not allow any kind of grouping of notebooks. What Evernote needs is contexts: you put notebooks into a context, and when you change to that context, it's all right there for you. I also want it to be easy to change contexts on the fly without having to navigate a GUI. I rarely use Evernote's Firefox capture because I'm never sure exactly which notebook it will go into. This is easy to fix with contexts. First, make it obvious which context I am in. Second, send captures to an incoming queue for that context so that I can easily sort them into a notebook at a later time.

    The third is archival. In any kind of content management system it should be easy to archive sets of data that may no longer be in use but also may still be useful someday. Really, storage is so cheap and search so good that we should never throw anything away unless keeping it has some kind of associated negative. Yet, I don't want the ghosts of projects past always cluttering what I am working on now. Take a cue from gmail and let me archive things to keep them out of my way but leave them searchable.

    The fourth is integration. In the end, what I want is a set of tools that integrate my entire life. Why is it that the idea of linking information has been so prominent on the web, yet on the desktop it's still in its infancy? Every chunk of data in my life should have a unique ID and be able to be referenced from any other chunk of data. I should never have to see or type in this ID; the system should be both implicit in information capture and intelligent in letting me select from content I would likely want to associate.

    Microsoft Office is getting closer (kind of). However, it infuriates me that I need to attach a contact or email to a task or appointment like it's some kind of file. Even worse, if I update that contact's information in my address book, the copied information isn't updated. Also, if Microsoft wants to be the one to provide me with my life-management solution, it needs to get its act together around search.

    As a software engineer I know that this is all a tall order, but I also know that it is entirely possible. I fear that the only reason we don't already have it is consciously erected anti-competitive barriers. What do we need to make it happen?
    1. Well-defined standards for chunking, referencing, and interchanging data. If I could pick applications I knew could talk and play nice with each other, I wouldn't use anything else. I suspect that, in general, it would be a fantastic competitive advantage for anyone who opted in.
    2. A unique ID for every chunk of data. Given a unique ID, any chunk of data in existence could reference any other. Computers are fast and have tons of memory, so these IDs could be quite large. Part of the ID could easily be related to a person and machine (see the toy sketch below); in this way it would be unnecessary to worry about collisions.
    3. Caching in the cloud. In this case, I can't help thinking a system like Freenet is the best direction to move in. Everyone would just cache each other's data altruistically, with heavy encryption of course. Even if that's not possible, we could store in the cloud and cache locally. I'm sure many businesses would be willing to perform this service for a nominal fee. At the very least the metadata for everything would be in the cloud. In this way even if you couldn't find the specific chunk, you might be able to find a superset, subset or copy with a different id via search.
    And that's all. Those three conditions are sufficient for an all-encompassing, self-referencing, content management system to be built. I hope very much that this is where we are headed because, while my current system of Evernote-Outlook-Gmail works both on my desktop and on my Microsoft SmartPhone, it by no means meets the entirety of my needs.
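
    Just to make that ID idea concrete, even something as dumb as combining who, where, and when with a random suffix would be unique enough in practice (a toy sketch, not a real proposal):

    # hypothetical composite content ID: user, machine, timestamp, random suffix
    printf '%s.%s.%s.%s\n' "$(id -un)" "$(hostname -s)" "$(date +%s)" "$(uuidgen | cut -c1-8)"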

    Like anyone else, the less I need to worry about my systems, the more I can focus on the content I am generating. I'm completely fed up with filling my life with the tons of little steps needed to move data between applications that simply refuse to share data with each other, because the people who made them want to lock you in. I'm completely willing to jump ship to any products that fulfill my need for open integration, and I'm sure many others feel the same. The future is tight but open integration, and the sooner it happens, the better off we will all be.

    Phew, that was quite a rant.

    Wednesday, January 21, 2009

    Setting Up The New EEE PC 1000

    I just recently got an Eee PC 1000. It's cool, it's tiny, and it came with Linux.

    I really like Linux and use it at home on my file server, but it didn't take me long to realize that my career as a Software Engineer using Microsoft products just won't allow me to have a Linux-only laptop. So today I wiped off Xandros and installed a super slimmed-down version of XP.

    1. I started with a heavily stripped-down version of XP. I was especially merciless when it came to drivers and services I knew I wouldn't need. It's easy to do with nLite.
    2. Disabled Prefetching, System Restore, Page File, and The Indexing Service.
    3. Installed Evernote, Firefox and Open Office
    4. Installed a few Firefox plugins to make the most of my screen real estate.


    Future Plans: