Git - Revision Control Perfected
Branching and Merging
The work you do in Git is specific to the current branch. A branch is simply a moving reference to a commit (SHA1 object name). Every time you create a new commit, the reference is updated to point to it—this is how Git knows where to find the most recent commit, which is also known as the tip, or head, of the branch.
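This is easy to verify. The sketch below builds a throwaway repository (the temp directory and the empty commit are purely illustrative) and shows that the branch ref really is just a plain-text file holding a commit's SHA1:

```shell
# A branch is just a file containing a commit's SHA1 object name.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "first commit"
git rev-parse HEAD                                  # SHA1 of the tip commit
cat ".git/refs/heads/$(git branch --show-current)"  # the same SHA1, as plain text
```

(On a fresh repository the ref is stored loose under .git/refs/heads/; Git may later pack refs into .git/packed-refs, but the principle is the same.)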
By default, there is only one branch ("master"), but you can have as many as you want. You create branches with git branch and switch between them with git checkout. This may seem odd at first, but the reason it's called "checkout" is that you are "checking out" the head of that branch into your working copy. This alters the files in your working copy to match the commit at the head of the branch.
Branches are super-fast and easy, and they're a great way to try out new ideas, even for trivial things. If you are used to other systems like CVS/SVN, you might have negative thoughts associated with branches—forget all that. Branching and merging are free in Git and can be used without a second thought.
Run the following commands to create and switch to a new local branch named "myidea":
git branch myidea
git checkout myidea
All commits now will be tracked in the new branch until you switch to another. You can work on more than one branch at a time by switching back and forth between them with git checkout.
Branches are really useful only because they can be merged back together later. If you decide that you like the changes in myidea, you can merge them back into master:
git checkout master
git merge myidea
Unless there are conflicts, this operation will merge all the changes from myidea into your working copy and automatically commit the result to master in one fell swoop. The new commit will have the previous commits from both myidea and master listed as parents.
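You can see the two-parent structure for yourself. This throwaway-repository sketch (branch name matches the article; the empty commits and messages are illustrative) diverges two branches, merges them, and prints the parents of the resulting merge commit:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m "base"
git checkout -q -b myidea
git commit -q --allow-empty -m "work on myidea"
git checkout -q -                    # back to the original branch
git commit -q --allow-empty -m "mainline work"   # diverge, so no fast-forward
git merge -q -m "merge myidea" myidea
git log -1 --format=%P               # prints the SHA1s of BOTH parents
```

The mainline commit matters: without it, Git would simply "fast-forward" the branch pointer instead of creating a merge commit.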
However, if there are conflicts—places where the same part of a file was changed differently in each branch—Git will warn you, update the affected files with "conflict markers" and not commit the merge automatically. When this happens, it's up to you to edit the files by hand, decide between the versions from each branch, and remove the conflict markers. To complete the merge, use git add on each formerly conflicted file, and then git commit.
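A complete round trip might look like the following sketch (the file greeting.txt and its contents are hypothetical): the merge fails, the file gains conflict markers, and the resolution is staged and committed by hand:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
echo "Hello" > greeting.txt
git add greeting.txt && git commit -q -m "base"
git checkout -q -b myidea
echo "Hello from myidea" > greeting.txt
git commit -q -am "change on myidea"
git checkout -q -
echo "Hello from master" > greeting.txt
git commit -q -am "conflicting change"
git merge myidea || true          # fails; greeting.txt now holds <<<<<<< markers
grep '<<<<<<<' greeting.txt       # the conflict marker is in the file
echo "Hello from both" > greeting.txt   # resolve by hand
git add greeting.txt              # mark the conflict as resolved
git commit -q -m "merge myidea"   # complete the merge
```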
After you merge from a branch, you don't need it anymore and can delete it:
git branch -d myidea
If you decide you want to throw myidea away without merging it, use an uppercase -D instead of the lowercase -d listed above. As a safety feature, the lowercase switch won't let you delete a branch that hasn't been merged.
To list all local branches, simply run:
git branch
Git provides a number of tools to examine the history and differences
between commits and branches. Use
git log to view commit histories and
git diff to view the differences between specific commits.
These are text-based tools, but graphical tools also are available, such as the gitk repository browser, which is essentially a GUI version of git log --graph that visualizes branch history. See Figure 2 for a screenshot.
Figure 2. gitk
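A text-mode approximation of the gitk view, run here in a throwaway repository so the example is self-contained (branch and file names are illustrative):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m "base"
git checkout -q -b myidea
echo "an idea" > idea.txt && git add idea.txt && git commit -q -m "idea"
git checkout -q - && git commit -q --allow-empty -m "mainline"
git merge -q -m "merge myidea" myidea
git log --graph --oneline --all   # ASCII-art view of the branched history
git diff HEAD^1 HEAD^2            # diff between the merge's two parents
```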
Git can merge from a branch in a remote repository simply by transferring needed objects and then running a local merge. Thanks to the content-addressed storage design, Git knows which objects to transfer based on which object names in the new commit are missing from the local repository.
The git pull command performs both the transfer step
(the "fetch") and
the merge step together. It accepts the URL of the remote repository (the
"Git URL") and a branch name (or a full "refspec") as arguments. The
Git URL can be a local filesystem path, or an SSH, HTTP, rsync or
Git-specific URL. For instance, this would perform a pull using SSH:
git pull user@host:/some/repo/path master
Git provides some useful mechanisms for setting up relationships with remote repositories and their branches so you don't have to type them out each time. A saved URL of a remote repository is called a "remote", which can be configured along with "tracking branches" to map the remote branches into the local repository.
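A sketch of that setup, using a local directory as the "remote" so the example is self-contained (the names origin, upstream and devel are illustrative; origin is merely the conventional remote name):

```shell
base=$(mktemp -d) && cd "$base"
git init -q upstream
git -C upstream -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "first commit"
branch=$(git -C upstream branch --show-current)

git init -q local && cd local
git remote add origin "$base/upstream"     # save the URL as the remote "origin"
git fetch -q origin                        # creates tracking ref origin/<branch>
git checkout -q -b devel "origin/$branch"  # local branch tracking the remote one
git pull -q                                # no URL or branch needed anymore
git remote -v                              # show the saved remote
```

Because devel was created from a remote-tracking branch, Git records it as the upstream automatically, which is why the bare git pull works.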