There is a great tutorial on using ufraw from a bash script here; my only change is converting to PNG, but that is entirely a matter of personal preference.
PNG version:
pef2png.sh
#!/bin/bash
if [ ! -d ./processed_images ]; then mkdir ./processed_images; fi;
# processes raw files
for f in *.pef;
do
echo "Processing $f"
ufraw-batch \
--wb=camera \
--exposure=auto \
--out-type=png \
--compression=96 \
--out-path=./processed_images \
"$f"
done
cd ./processed_images
# change the image names
for i in *.png;
do
mv "$i" "${i/.png}"_r.png;
done
for i in *.png;
do
mv "$i" "${i/imgp/_igp}";
done
Usage:
# Convert all pef files in the current directory to png
./pef2png.sh
GitHub is a pleasure to work with. In 10 years I created 5 or 6 projects on SourceForge; on GitHub I've already created 3 new projects in less than a year because they are just so easy to set up. An early design decision for SourceForge was to make deleting projects hard. The idea was that a project should never die, just "retire" until a new maintainer comes along. I don't think I have users of my libraries or applications, let alone prospective maintainers (Actually, I have had a few emails and bug reports for GetFree and it has had quite a few downloads, but that was when it was new and exciting). I wish I could change my projects' statuses to "unmaintained". After 1 or 2 years of being unmaintained, if a project hasn't been adopted by another user then, after a warning email, it should be automatically deleted. There are no orphaned projects on SourceForge, only projects that the owner lets stagnate.
Anyway, this post isn’t about SourceForge’s flaws, it is about GitHub. Half of GitHub’s success is due to using Git. There is a very handy application which works in tandem with git, called git-svn which makes migrating from svn painless. You can setup mappings from svn users to git users, clone an svn project into a git project with full history and for those that have to use legacy svn repositories, it can even commit back to svn. Once you get your application into GitHub though you will probably want to stick with git. This is where the fun begins.
You can fork other projects in seconds, work on your own branch for a bit (Fixing bugs, implementing new functionality, etc.) and then send a pull request to have the owner merge your changes back again (And merging has been fixed in git; unlike svn, it doesn't make me groan any more). GitHub keeps a record of where a project originated and can generate nice graphs of where projects were forked and merged, as well as statistics about which languages were used and when each developer typically commits, plus all the basics such as bug tracking, a web view of the repository, and activity logs for each project and each user.
On the weekend I was installing the new version of Mythbuntu (More interesting screenshots here) and I had a weird error, "No root file system is defined". At first I thought it must have been something to do with failing to recognise the existing partitions, or possibly they were corrupt. "fdisk -l" worked, returning sda, sda1-sda4, which was correct, however "mount" would always fail. It turned out that it was just our old friend dmraid breaking in new and unexpected ways. Here is how I worked around it:
Boot into live CD mode
Remove dmraid via Package Manager
Run Install Mythbuntu from the desktop shortcut
On multiple distributions and motherboards I consistently have problems with dmraid not finding or incorrectly identifying partitions/drives. I'm not the only one with these problems. I'm sure I am having them because I have RAID hardware but am not using RAID. Surely RAID is an advanced enough feature that people who use it can be expected to install/add it themselves? Perhaps the install could be attempted without RAID support, and the installer could ask "Do you use RAID?" or "Are these devices correct?", at which point the installation restarts/redetects with dmraid enabled.
Firefox add-ons such as 1-Click YouTube Video Download are good, but one must visit every video and manually save it. I have a lot of favourites and I wanted to back them up with as little effort as possible. Here is how.
You will need youtube-dl. It is available via yum/apt-get, however the version provided may be quite old (In Fedora 14 it did not support --playlist-start and --playlist-end).
Create a new playlist and add all the videos from your favourites (Note: A playlist can only contain 200 videos; if you have more than this you will have to make multiple playlists and repeat this process for each one).
Click on “Play All”. Copy the URL from the address bar for the first video when it tries to play (You only need this bit: “http://www.youtube.com/view_play_list?p=AE198A14013A3005”).
Now tell youtube-dl to download the videos in the playlist:
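Something like this should do it (a sketch; flag support varies between youtube-dl versions, as noted above):

youtube-dl --playlist-start 1 --playlist-end 200 "http://www.youtube.com/view_play_list?p=AE198A14013A3005"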
If you had more than 200 videos repeat this process for the ones that didn’t fit in the playlist.
This should work for other people’s playlists too.
You might want to make a note of the latest video in your favourites so that the next time you can backup only the new videos.
This process still requires some user interaction; if you find an even easier way where I can just say, "This is my user name, make it happen", I'd love to know about it.
malloc: *** error for object 0x3874a0: double free
*** set a breakpoint in malloc_error_break to debug
malloc: *** error for object 0x18a138: Non-aligned pointer being freed
*** set a breakpoint in malloc_error_break to debug
So basically something in your code is screwing around with memory.
Either releasing something that has already been released:
int* x = new int[10];
delete [] x;
delete [] x;
Or releasing something that is not pointing to the start of an allocated block of memory:
int* x = new int[10];
x++;
delete [] x;
The error message isn’t very clear if you have no experience with GDB. GDB is a debugger for your binaries. It allows you to set break points at the start of a function and any time that function is called your application will pause and allow you to debug in GDB. We can then get valuable information back by executing commands to get the backtrace, registers state and disassembly. The advantage of using GDB over Xcode/KDevelop is being able to break into any function, not just functions in your source code. Anyway, this is how I got the backtrace to find out where in my sourcecode I was making a mistake:
gdb
file myapplication
break malloc_error_break
run
backtrace
Now whenever a double free or non-aligned pointer free occurs, it will break into gdb and we can type in "backtrace" and work out what our code did to trigger this.
After an update (Upgrade?) a while ago I couldn't boot into Fedora; it showed the text mode bar graph and, after getting to 100%, it failed with this error message:
Cannot open /dev/sda1: Device or resource busy
It turned out that this was a dmraid problem. It would appear that something changed when updating and added or enabled dmraid, so I had to find a way to remove or disable it. The simplest solution I found that worked was disabling it via the arguments to the kernel in GRUB.
Edit menu.lst (Or grub.conf; my menu.lst is a symbolic link to grub.conf):
su
gedit /boot/grub/menu.lst
Find the entry that you are currently booting into and add “nodmraid” to the end of the “kernel” line:
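For example (a hypothetical entry; your kernel version and root device will differ):

kernel /vmlinuz-2.6.27.5-117.fc10.x86_64 ro root=/dev/sda1 rhgb quiet nodmraid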
# Recursively search for folders called .svn and delete them (Even if not empty)
find . -name .svn -type d -print | xargs rm -rf
# Recursively search for files called *~ (gedit creates these for temporarily saving to) and delete them
find . -name \*~ -print | xargs rm -rf
# Recursively search for files called *.h or *.cpp and print them
find . \( -name '*.h' -o -name '*.cpp' \) -print
So I have a few projects on SourceForge, but they're all hosted via SVN. With all this distributed version control going on I thought I would like to get in on the action. I thought about moving to GitHub, but what happens in 5 or 10 years when I want to move on to other revision control software? The name has a limited life expectancy. Anyway, I wanted to switch, so I researched a lot; this is the best information I found, and this blog entry is basically a rehash specific to SourceForge (Because I didn't find any good SourceForge specific information).
Note: For this tutorial you will want to change all occurrences of USERNAME to your user name, for example "pilkch", PROJECTNAME to your project name, for example "breathe", and YOURFULLNAME to your full name, for example "Chris Pilkington".
First of all we need to download git and git-svn:
su
yum install git git-svn
Now we are going to create our repo directory for holding our repositories:
cd ~
mkdir repo
Create a users.txt file to map our subversion users to git users:
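Each line maps one svn user name to a git identity; a minimal sketch using the placeholders above:

USERNAME = YOURFULLNAME <USERNAME@users.sourceforge.net>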
As the other article says, we basically check out a svn directory as a git repository:
cd repo
mkdir PROJECTNAME_from_svn
cd PROJECTNAME_from_svn
git svn init http://PROJECTNAME.svn.sourceforge.net/svnroot/PROJECTNAME/PROJECTNAME --no-metadata
git config svn.authorsfile ../users.txt
git svn fetch
Check that it worked (Just read the last few changes to make sure the svn history is present; you can hit spacebar to scroll through a page or two of history just to make sure):
git log
From now on we can use git commands. First of all we want to create a copy of the git-svn repository:
cd ..
git clone PROJECTNAME_from_svn PROJECTNAME
PROJECTNAME/ now contains our “clean” repository and PROJECTNAME_from_svn can be deleted if you like. We now just need to add and push our local repository to the remote location:
cd PROJECTNAME
git config user.name "YOURFULLNAME"
git config user.email "USERNAME@users.sourceforge.net"
git remote rm origin # This may not be necessary for you
git remote add origin ssh://USERNAME@PROJECTNAME.git.sourceforge.net/gitroot/PROJECTNAME/PROJECTNAME
git config branch.master.remote origin
git config branch.master.merge refs/heads/master
git push origin master
Now to check that this is working you can browse to the git page of your SourceForge project and there should be data in your repository. And we can clone our repository back again to check that everything is working.
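For example (the scratch directory name PROJECTNAME_verify is arbitrary):

cd ~/repo
git clone ssh://USERNAME@PROJECTNAME.git.sourceforge.net/gitroot/PROJECTNAME/PROJECTNAME PROJECTNAME_verify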
You may also want to ignore certain types of files; place a file called .gitignore in the root directory of your project and fill it with the patterns you want ignored:
.gitignore
.DS_Store
.svn
._*
~$*
.*.swp
Thumbs.db
Now when we want to update we can do:
git commit -a -m "This is my commit message." # All changes to the local repository need to be committed before we try merging new changes
git pull # Grab any changes from the main repository
Committing is slightly different:
git add .gitignore # For example we might want to add our new .gitignore file
git commit -a -m "This is my commit message." # Note: Your commit is now only in your local repository, it is not in the main repository yet
git push # Now it is pushed into the main repository
The last step is to remove your svn repository which for SourceForge is as simple as unchecking a checkbox on the Admin->Features page.
$ ls -l
total 24
drwxrwxr-x ... thisisadirectory
-rw-rw-r-- ... thisisafile
lrwxrwxrwx. ... thisisalink -> /media/data
We know what the rwx fields are, but what about d, - and l? OK, those are pretty obvious too; here is a list including the more obscure ones, because I always forget.
d  Directory.
l  Symbolic link.
-  Regular file.
b  Block buffered device special file.
c  Character unbuffered device special file.
s  Socket link.
p  FIFO pipe.
.  Indicates a file with an SELinux security context, but no other alternate access method.
s  setuid. This one is only found in the execute field, not the file type field.
If there is a "-" in a particular location, there is no permission. This may be found in any field, whether read, write, or execute.
The file permission bits include an execute permission bit for the file's owner, group and other. When the execute bit for the owner is set to "s", the set user ID bit is set. This causes any person or process that runs the file to have access to system resources as though they are the owner of the file. When the execute bit for the group is set to "s", the set group ID bit is set, and the user running the program is given access based on the access permissions of the group the file belongs to.
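For example, with a hypothetical executable called myprogram that you own:

chmod 4755 myprogram   # rwxr-xr-x permissions plus the setuid bit
ls -l myprogram        # Now shows -rwsr-xr-x: "s" in the owner execute field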
I’ve just created a very small C++ wrapper (libxdgmm) for accessing XDG more easily. To use it, you need libxdgmm.h and libxdgmm.cpp. Just add these to your project and then use them like so:
#include <cstdlib>
#include <iostream>
#include <string>

#include <libxdgmm/libxdg.h>

int main(int argc, char** argv)
{
  if (!xdg::IsInstalled()) std::cout << "XDG is not installed" << std::endl;
  else {
    std::string data;
    xdg::GetDataHome(data);
    std::cout << "data=\"" << data << "\"" << std::endl;

    std::string config;
    xdg::GetConfigHome(config);
    std::cout << "config=\"" << config << "\"" << std::endl;

    // Obviously these have to exist to work. You can translate the error code returned by calling xdg::GetOpenErrorString(int result);
    xdg::OpenFile("/home/chris/dev/cMd3Loader.cpp");
    xdg::OpenFolder("/home/chris/");
    xdg::OpenURL("http://chris.iluo.net");
  }

  return EXIT_SUCCESS;
}
I still have to wrap some of the other functionality, such as XDG_DESKTOP_DIR, XDG_DOCUMENTS_DIR, XDG_MUSIC_DIR, desktop-file-utils, xdg-desktop-menu, xdg-desktop-icon etc. I will wrap these as I need them (Or by special request). I don't think I will be supporting xdg-screensaver or xdg-mime as I don't have a use for them right now.
However, don’t go the websites, all of these are available in the (Default?) repositories, so you can either install them via yum, PackageKit or apt-get. Also note: RapidSVN and Meld are only needed if you want to use SVN. Even KDevelop is not required if you have another text editor that you prefer such as gedit/vi/emacs. If you want to create your provide your own make file then you don’t need cmake either.
Anyway, a simple application that just tests that you can do a 64 bit compile is pretty straightforward. 1) Create your main.cpp file with an int main(int argc, char* argv[]); in it.
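A minimal sketch (printing the sizes is just a handy way to confirm a 64 bit build):

#include <iostream>

int main(int argc, char* argv[])
{
  // On a 64 bit build this prints sizeof(void*)=8 (int typically stays at 4)
  std::cout << "sizeof(int)=" << sizeof(int) << ", sizeof(void*)=" << sizeof(void*) << std::endl;
  return 0;
}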
2) Create a CMakeLists.txt that includes your main.cpp:
# Set the minimum cmake version
cmake_minimum_required (VERSION 2.6)
# Set the project name
project (size_test)
# Add executable called "size_test" that is built from the source file
# "main.cpp". The extensions are automatically found.
add_executable (size_test main.cpp)
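3) Configure, build and run, which is just the usual cmake workflow; note that nothing here mentions the architecture:

cmake .
make
./size_test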
As you can see, nothing in this is specific to x86_64. The beauty of gcc is that by default it compiles to the architecture it is being run on. I had previously thought that it would be a world of pain, making sure that my compiler built the right executable code and linked in the correct libraries. I know this project doesn't use any special libraries, but (because of cmake?) the process is exactly the same as using cmake under 32 bit to make 32 bit executables. You just make sure that they are there using Find*.cmake and then add them to the link step:
SET(LIBRARIES
ALUT
OpenAL
GLU
SDL
SDL_image
SDL_net
SDL_ttf
)
# Some of the libraries have different names than their Find*.cmake name
SET(LIBRARIES_LINKED
alut
openal
GLU
SDL
SDL_image
SDL_net
SDL_ttf
)
FOREACH(LIBRARY_FILE ${LIBRARIES})
Find_Package(${LIBRARY_FILE} REQUIRED)
ENDFOREACH(LIBRARY_FILE)
# Link our libraries into our executable
TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${LIBRARIES_LINKED})
Note that we don’t actually have to specify the architecture for each package or even the whole executable. This is taken care of by cmake. Anyway, it is not some mysterious black magic, it is exactly the same as you’ve always been doing. Cross compiling is slightly different, but basically you would just specify -m32 and make sure that you link against the 32 bit libraries instead. If I actually bother creating another 32 bit executable in my life I’ll make sure that I document right here exactly how to do a cross compile from 64 bit.
The advantages of 64 bit are, mmm, not so great unless you deal with really big files/memory, ie. more than 4 GB. Perhaps more practical are the extra registers and the upgrade to 64 bit registers, so you may see an increase in speed or parallelisation of 64 bit operations; for example a game may want to use 64 bit colours (ie. 4x 16 bit floats instead of 4x 8 bit ints to represent an rgba pixel).
Things to watch out for:
int is still 32 bit! If I were implementing the x86_64 version of the C++ standard/gcc (in a perfect world), this would have been changed to 64 bit, ie. "int" would be your native int size; it's logical, it makes sense. However, I do understand that this would have broken a lot of code. The problem is, if int != architecture bits, then why have it at all? Why not drop it from the standard, just have int32_t and int64_t, and be done with it? Then, if a program chooses, it can have:
typedef int32_t int;
or
typedef int64_t int;
as it sees fit. Anyway.
Pointers are now 64 bit! So you can use them with size_t, but you cannot use them with int.
Under Linux x86_64 gcc, sizeof(int*) == sizeof(function*); however, this is not guaranteed anywhere and may change on another platform/compiler. Don't do stuff like this:
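Along these lines (a sketch of the anti-pattern; SomeFunction is a hypothetical example):

void SomeFunction() {}

int main(int argc, char* argv[])
{
  // Relies on sizeof(void*) == sizeof(a function pointer), which holds on
  // Linux x86_64 gcc but is not guaranteed by the standard elsewhere
  void* p = (void*)&SomeFunction;
  (void)p;
  return 0;
}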
I have been dipping my toe into x86_64 waters sporadically over the last couple of years. On each of the previous occasions it always seemed too immature: packages were way too hard to come by (I prefer precompiled binaries), half my hardware didn't work, there were strange crashes, etc. Seeing as this episode has been 100% successful, I thought this time I would document it.
Fedora
My favourite distribution is Fedora due to its rapid development and ease of use. I downloaded it via BitTorrent; (Obviously) make sure you get the x86_64 version. I always like to run the sha checksum so I can rule that out as the problem if something does arise later. I also make sure that my DVD verifies in my burning program after it has been burnt.
Now we are ready to install. Unless you have something really exotic you should not need any special drivers or anything (At least not until after the install), it should just work. The important parts of my hardware are:
Asus A8V-E SE (Not awesome, my awesome motherboard blew up causing me to downgrade to this one I had lying around) AMD Socket 939
AMD Athlon 64 X2 4800+ CPU
nVidia GeForce 8600 GT 256MB PCIe
I use the onboard LAN and sound card, as well as 2 SATA drives, an IDE drive and an IDE CDROM.
So I installed Fedora from the DVD. You can again choose to verify the media; weirdly (And in previous versions as well) this check always seems to fail even though the sha check and burning software verification succeed, so either the check is broken or the motherboard/drive is. I have never seen this verification succeed in my life. Anyway, I skip it now, and the options I select (At appropriate times) are a fresh install onto a blank drive and the "Software Development" profile/packages (You can probably turn off the other profiles; you can install any required packages individually later on when you are in the OS anyway). Next time I do an install I would love to try an upgrade install.
That should all install (You don't have to get too serious about selecting the right packages right now; I find it easier to install "generally" what I need ("Software Development") and then customise later) and you should now be logging into a fresh install of Fedora 10.
Initially I had some problems with an additional PCI sound card that was present out of habit, because I had never gotten onboard sound to work on any motherboard under Linux. Some programs were using the onboard card and some were using the PCI one, so I rebooted and went into the BIOS to disable the onboard one. Both still got detected; apparently this is a common problem with this motherboard. I went to update the BIOS and, wouldn't you believe it, the BIOS updater is Windows only. Anyway, because the onboard sound card was being detected I just removed the PCI one and enabled the onboard one again. That fixed it up awesomely and I had audio, yay. Also, removing PulseAudio can "unconfuse" applications and force them to use ALSA:
yum remove alsa-plugins-pulseaudio
I then noticed that I had some issues with audio playback stuttering, cycling through normal speed and then fast for a second and then normal again. I fixed it by following this tutorial.
Add the Livna repo by downloading and running the add repo rpm; it is not linked to on the main page, but the URL can be built from the other releases. Add the RPMFusion repo by downloading and running both the free and non-free add repo rpms.
For my information: RPMFusion provides additional packages that are not in the base Fedora repos. Livna provides the same packages as RPMFusion, but also provides the libdvdcss package for watching DVDs.
I have never had much luck with ATI drivers for Linux. I had heard the nVidia ones were easier to install and configure, and apparently faster to boot. Before you install the drivers, you might want to get a benchmark of your FPS in glxgears:
glxgears
I downloaded and installed the nVidia (Binary, proprietary) driver:
sudo yum install kmod-nvidia
Now reboot (It’s the easiest way to restart X). Test that hardware accelerate rendering is happening by looking for in the output of this command: glxinfo | grep direct
And your glxgears FPS should be above 2000:
glxgears
Adobe recently released an x86_64 Linux version of Flash, so we don't have to mess around with nspluginwrapper etc. any more. I downloaded it from here, extracted it, and then:
su
cp ./libflashplayer.so /usr/lib64/mozilla/plugins
Then restart Firefox. You may want to test it also.
For my benefit for next time, I also like:
Neverball and Neverputt
VDrift
Torcs
Nexuiz
Open Arena
Urban Terror
XMoto
I have not provided any links to these as they are all present in PackageKit which comes with Fedora 10.
Also for my information: Firefox Add Ons
Adblock Plus
Flashblock
NoScript
PDF Download
FireBug
Nightly Tester Tools
Net Usage Item
Open links in Firefox in the background
Type about:config into the address bar in Firefox, then look for the line browser.tabs.loadDivertedInBackground and set it to true.
Automatic Login
su
gedit /etc/gdm/custom.conf
And add this text:
[daemon]
# http://live.gnome.org/GDM/2.22/Configuration
TimedLoginEnable=true
TimedLogin=yourusername
TimedLoginDelay=30
NTFS Drives
Gnome automatically finds and mounts NTFS drives/partitions, however in Fedora 9 and later, ownership is broken. Each partition (And every sub folder and file) seems to default to root ownership, so even though some operations work, such as moving files around and even adding and deleting, some programs will complain (I found this problem through RapidSVN not working). Nautilus reports that you are not the owner, and even if you run Nautilus as root you cannot change the owner to anything other than root. The way I solved this was to install ntfs-config and run it with:
sudo ntfs-config
You should now have valid entries in /etc/fstab. Back it up and open it for editing:
sudo cp /etc/fstab /etc/fstab.bak
gksudo gedit /etc/fstab
Something like this (One for each partition; the ones you are interested in are any with ntfs-3g):
UUID=A2D4DF1DD4DEF291 /media/DUMP ntfs-3g defaults,nosuid,nodev,locale=en_AU.UTF-8 0 0
I then edited each ntfs-3g line like so:
UUID=A2D4DF1DD4DEF291 /media/DUMP ntfs-3g defaults,rw,user,uid=500,gid=500,umask=0022,nosuid,nodev,locale=en_AU.UTF-8 0 0
Where uid=youruserid and gid=yourgroupid. You can find these out via System->Administration->Users and Groups (There is indeed a command for finding this out too, shown below). If you log in with different users, perhaps changing to a common group would be better? Reboot to let these settings take effect. If you now view your partition in Nautilus, File->Properties->Permissions should list you as the owner, with your group.
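The command in question is id, which prints the current user's numeric IDs:

id -u   # Your user id, for example 500
id -g   # Your group id, for example 500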
You now have a pretty well set up Fedora 10 installation. These steps should be pretty similar for future versions; I will probably refer back to these when I install Fedora 11 or 12 in a year or two. I love Fedora because it is the total opposite of Windows. With Vista, Microsoft stagnated, waiting a year or two longer than they should have to release a product that by that time was out of touch with the target audience. In contrast, I had been planning to install Fedora 9 this time around, after installing 8 only 6-12 months ago, but I was pleasantly surprised to find that 10 had been released. I would also like to try Ubuntu, as I haven't really used it much apart from at work, so I might give that a shot next time. x86_64 has certainly matured over the last 2 or 3 years; I would say it is definitely production server ready and probably has been for at least a year. The quality and variety of packages available for Linux is amazing, the community just keeps on giving. Fedora just keeps on amazing me. The future is bright, I can't wait.