Git - Revision Control Perfected
A remote named "origin" is configured automatically when a repository
is created using
git clone. Consider a clone of Linus Torvald's Kernel
Tree mirrored on GitHub:
git clone https://github.com/mirrors/linux-2.6.git
If you look inside the new repository's config file (.git/config), you'll see these lines set up:
[remote "origin"] fetch = +refs/heads/*:refs/remotes/origin/* url = https://github.com/mirrors/linux-2.6.git [branch "master"] remote = origin merge = refs/heads/master
The fetch line above defines the remote tracking branches. This "refspec" specifies that all branches in the remote repository under "refs/heads" (the default path for branches) should be transferred to the local repository under "refs/remotes/origin". For example, the remote branch named "master" will become a tracking branch named "origin/master" in the local repository.
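You can list the tracking branches this refspec creates with git branch -r. For the clone above, the output would include lines like these (abbreviated here; the real kernel tree has many more branches):
git branch -r
  origin/HEAD -> origin/master
  origin/master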
The lines under the branch section provide defaults—specific to the
master branch in this example—so that
git pull can be called with
no arguments to fetch and merge from the remote master branch into the
local master branch.
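These defaults map directly to git config settings, so you also can set or inspect them from the command line. For example, the two lines under the branch section above are equivalent to:
git config branch.master.remote origin
git config branch.master.merge refs/heads/master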
The git pull command is actually a combination of the
git fetch and
git merge commands. If you do a
fetch instead, the tracking branches
will be updated and you can compare them to see what changed. Then you
can merge as a separate step:
git merge origin/master
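A typical fetch-then-merge session looks like this (git log's two-dot syntax lists the commits on origin/master that aren't yet on your local master):
git fetch origin
git log master..origin/master
git merge origin/master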
Git also provides the
git push command for uploading to a remote
repository. The push operation is essentially the inverse of the pull
operation, but since it won't do a remote "checkout" operation, it is
usually used with "bare" repositories. A bare repository is just the
git database without a working copy. It is most useful for servers where
there is no reason to have editable files checked out.
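For example, you might create a bare repository on a server, then point a local repository at it and push (the paths and hostname here are hypothetical). On the server:
git init --bare /srv/git/project.git
Then, from the local repository:
git remote add origin ssh://server/srv/git/project.git
git push origin master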
git push will allow only a
"fast-forward" merge where the
local commits derive from the remote head. If the local head and remote
head have both changed, you must perform a full merge (which will create a
new commit deriving from both heads). Full merges must be done locally,
so all this really means is that you must call git pull first
if someone else committed something first.
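In practice, a rejected push on master is handled by pulling and then pushing again (assuming the branch defaults described above):
git pull
git push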
This article is meant only to provide an introduction to some of Git's most basic features and usage. Git is incredibly powerful and has a lot more capabilities beyond what I had space to cover here. But, once you realize all the features are based on the same core concepts, it becomes straightforward to learn the rest.
Check out the Resources section for some sites where you can learn more. Also, don't forget to read the git man page.
Resources
Git Home Page: http://git-scm.com
Git Community Book: http://book.git-scm.com
Why Git Is Better Than X: http://whygitisbetterthanx.com
Google Tech Talk: Linus Torvalds on Git: http://www.youtube.com/watch?v=4XpnKHJAok8