How do I dual boot Windows XP and Fedora Core 8 with one HD
I have been trying to dual boot Windows XP Pro and Fedora Core 8. I created two partitions on ONE hard drive and installed Windows XP Pro on the first partition. When I install Fedora Core 8 on the second partition, nothing happens. I installed GRUB and backed up the boot loader for my Windows XP Pro OS, but I still can't get it to dual boot. Are there any detailed instructions on how to set up a dual boot on a single hard drive? I have another computer with two hard drives where dual boot runs fine. I need this computer, which has a single 40GB hard drive, to dual boot as well. Is there any way to do this without having to buy a second HD for this computer?
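For reference, single-drive dual boot with GRUB legacy (the boot loader shipped with Fedora Core 8) usually comes down to having one `menu.lst` entry per OS, with Windows started via chainloading. The sketch below is illustrative only: the partition numbers `(hd0,0)`/`(hd0,1)`, the root device `/dev/sda2`, and the kernel version are assumptions about the layout described above, not details taken from the actual machine, so they need to be adjusted to match the real installation.

```
# /boot/grub/menu.lst -- illustrative sketch, not a verified config.
# Adjust device names and the kernel/initrd version to your system.
default=0
timeout=10

# Fedora Core 8 on the second partition (assumed to be (hd0,1),
# i.e. /dev/sda2 on the assumed layout)
title Fedora Core 8
    root (hd0,1)
    kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/sda2
    initrd /initrd-2.6.23.1-42.fc8.img

# Windows XP Pro on the first partition (assumed to be (hd0,0));
# chainloader +1 hands control to the Windows boot sector
title Windows XP Pro
    rootnoverify (hd0,0)
    chainloader +1
```

With GRUB installed to the drive's MBR and an entry like the Windows one above present, selecting it simply passes control to the Windows partition's own boot sector, so both systems can coexist on one disk.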