Hadoop 2.6 - Installing on Ubuntu 14.04 (Single-Node Cluster)
K Hong
Installing Java
The Hadoop framework is written in Java!!
k@laptop:~$ cd ~
# Update the source list
k@laptop:~$ sudo apt-get update
# The OpenJDK project is the default version of Java
# that is provided from a supported Ubuntu repository.
k@laptop:~$ sudo apt-get install default-jdk
k@laptop:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.1
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Adding a dedicated Hadoop user

k@laptop:~$ sudo addgroup hadoop
k@laptop:~$ sudo adduser --ingroup hadoop hduser
...
Other []:
Is the information correct? [Y/n] Y
Installing SSH
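Hadoop requires ssh access to manage its nodes, even for a single-node setup, so both the ssh client and the sshd server need to be present. A minimal sketch of the install step, assuming the stock Ubuntu packages (the ssh metapackage pulls in both the client and the server):

k@laptop:~$ sudo apt-get install ssh
k@laptop:~$ which ssh
/usr/bin/ssh
k@laptop:~$ which sshd
/usr/sbin/sshd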
Create and Setup SSH Certificates
k@laptop:~$ su hduser
Password:

k@laptop:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 hduser@laptop
The key's randomart image is:
+--[ RSA 2048]----+
|      .oo.o      |
|     . .o=. o    |
|    . + .  o .   |
|     o =    E    |
|      S +        |
|       . +       |
|        O +      |
|         O o     |
|          o..    |
+-----------------+
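The key generated above only enables passwordless logins once it is authorized; the usual follow-up, assumed here, is to append the public key to hduser's authorized_keys file and then confirm that ssh to localhost works without a password:

hduser@laptop:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
hduser@laptop:~$ ssh localhost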
Install Hadoop
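Before the move shown below, the Hadoop archive has to be downloaded and unpacked; a minimal sketch of that step, assuming the 2.6.0 tarball is fetched from the Apache archive:

hduser@laptop:~$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
hduser@laptop:~$ tar xvzf hadoop-2.6.0.tar.gz
hduser@laptop:~$ cd hadoop-2.6.0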
hduser@laptop:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
[sudo] password for hduser:
hduser is not in the sudoers file.  This incident will be reported.

Oops!... We got "hduser is not in the sudoers file". This can be fixed by switching back to a user with sudo rights and adding hduser to the sudo group:

hduser@laptop:~/hadoop-2.6.0$ su k
Password:

k@laptop:/home/hduser$ sudo adduser hduser sudo
[sudo] password for k:
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.
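With hduser now in the sudo group, the interrupted move can be repeated as hduser and the installation handed over to hduser; a sketch of that follow-up (the exact commands are assumptions):

k@laptop:/home/hduser$ sudo su hduser
hduser@laptop:~$ cd ~/hadoop-2.6.0
hduser@laptop:~/hadoop-2.6.0$ sudo mkdir -p /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop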
Setup Configuration Files

The following files have to be modified to complete the Hadoop setup:

1. ~/.bashrc:

Before editing .bashrc in hduser's home directory, we need to find the path where Java has been installed in order to set the JAVA_HOME environment variable:

hduser@laptop:~$ update-alternatives --config java
There is only one alternative in link group java (providing /usr/b
Nothing to configure.

Now we can append the following to the end of ~/.bashrc:

hduser@laptop:~$ vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
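The new variables only take effect in a shell that has re-read .bashrc; a quick, assumed sanity check:

hduser@laptop:~$ source ~/.bashrc
hduser@laptop:~$ echo $HADOOP_INSTALL
/usr/local/hadoop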
2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
We need to set JAVA_HOME in hadoop-env.sh so that its value is available to Hadoop whenever it starts up, regardless of the shell environment:

hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
3. /usr/local/hadoop/etc/hadoop/core-site.xml:
The /usr/local/hadoop/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when starting up. This file can be used to override the default settings that Hadoop starts with.

First create a directory that Hadoop can use for temporary files and give hduser ownership of it:

hduser@laptop:~$ sudo mkdir -p /app/hadoop/tmp
hduser@laptop:~$ sudo chown hduser:hadoop /app/hadoop/tmp

Then open the file and add the following properties:

hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml
By default, the /usr/local/hadoop/etc/hadoop/ folder contains the
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
file, which has to be renamed/copied to mapred-site.xml:

hduser@laptop:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.t
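Spelled out in full, with the destination name taken from the sentence above, the copy presumably looks like this:

hduser@laptop:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml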
<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>
5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
The /usr/local/hadoop/etc/hadoop/hdfs-site.xml file needs to be configured for each host in the cluster that is being used. It is used to specify the directories which will be used as the namenode and the datanode on that host.
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
</configuration>
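hdfs-site.xml is also where the namenode and datanode directories mentioned above are usually declared; a sketch of what that typically looks like, with the storage paths being assumptions (they must exist and be owned by hduser:hadoop before formatting):

hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
hduser@laptop:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

 <property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>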
Format the New Hadoop Filesystem

Now the new Hadoop file system needs to be formatted so that we can start using it:
hduser@laptop:~$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecat
Instead use the hdfs command for it.
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop
...
STARTUP_MSG:   java = 1.7.0_65
************************************************************/
15/04/18 14:43:03 INFO namenode.NameNode: registered UNIX signal h
15/04/18 14:43:03 INFO namenode.NameNode: createNameNode [-format]
15/04/18 14:43:07 WARN util.NativeCodeLoader: Unable to load nativ
Formatting using clusterid: CID-e2f515ac-33da-45bc-8466-5b1100a2bf
15/04/18 14:43:09 INFO namenode.FSNamesystem: No KeyProvider found
15/04/18 14:43:09 INFO namenode.FSNamesystem: fsLock is fair:true
15/04/18 14:43:10 INFO blockmanagement.DatanodeManager: dfs.block.
...
15/04/18 14:43:10 INFO namenode.FSNamesystem: HA Enabled: false
15/04/18 14:43:10 INFO namenode.FSNamesystem: Append Enabled: true
...
15/04/18 14:43:11 INFO namenode.NNConf: ACLs enabled? false
15/04/18 14:43:11 INFO namenode.NNConf: XAttrs enabled? true
15/04/18 14:43:11 INFO namenode.NNConf: Maximum size of an xattr:
15/04/18 14:43:12 INFO namenode.FSImage: Allocated new BlockPoolId
15/04/18 14:43:12 INFO common.Storage: Storage directory /usr/loca
15/04/18 14:43:12 INFO namenode.NNStorageRetentionManager: Going
15/04/18 14:43:12 INFO util.ExitUtil: Exiting with status 0
15/04/18 14:43:12 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
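As the deprecation warning above notes, the same format can also be run through the hdfs front end; an equivalent invocation would be:

hduser@laptop:~$ hdfs namenode -format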
Starting Hadoop
k@laptop:~$ cd /usr/local/hadoop/sbin
k@laptop:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh
hadoop-daemon.sh
hadoop-daemons.sh
hdfs-config.cmd
hdfs-config.sh
httpfs.sh
kms.sh
mr-jobhistory-daemon.sh
refresh-namenodes.sh
slaves.sh
start-all.cmd
start-all.sh
start-balancer.sh
start-dfs.cmd
start-dfs.sh
start-secure-dns.sh
start-yarn.cmd
start-yarn.sh
stop-all.cmd
stop-all.sh
stop-balancer.sh
stop-dfs.cmd
stop-dfs.sh
stop-secure-dns.sh
stop-yarn.cmd
stop-yarn.sh
yarn-daemon.sh
yarn-daemons.sh
hduser@laptop:/usr/local/hadoop/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/18 16:43:13 WARN util.NativeCodeLoader: Unable to load nativ
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/ha
localhost: starting datanode, logging to /usr/local/hadoop/logs/ha
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/
15/04/18 16:43:58 WARN util.NativeCodeLoader: Unable to load nativ
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-h
localhost: starting nodemanager, logging to /usr/local/hadoop/logs
hduser@laptop:/usr/local/hadoop/sbin$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode
The output means that we now have a functional instance
of Hadoop running on our VPS (Virtual private server).
Another way to check is using netstat:
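A sketch of that netstat check (assuming the classic net-tools netstat is available); the Hadoop daemons should show up as java processes listening on their configured ports:

hduser@laptop:~$ sudo netstat -plten | grep java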
Stopping Hadoop
hduser@laptop:/usr/local/hadoop/sbin$ pwd
/usr/local/hadoop/sbin
hduser@laptop:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh  httpfs.sh                start-all.cmd
hadoop-daemon.sh       kms.sh                   start-all.sh
hadoop-daemons.sh      mr-jobhistory-daemon.sh  start-balancer.sh
hdfs-config.cmd        refresh-namenodes.sh     start-dfs.cmd
hdfs-config.sh         slaves.sh                start-dfs.sh
...
hduser@laptop:/usr/local/hadoop/sbin$
hduser@laptop:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
15/04/18 15:46:31 WARN util.NativeCodeLoader: Unable to load nativ
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: no secondarynamenode to stop
15/04/18 15:46:59 WARN util.NativeCodeLoader: Unable to load nativ
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
between host and
container (docker
run -d -p -v)
Linking containers
and volume for
datastore
Let's start Hadoop again and see its Web UI:
hduser@laptop:/usr/local/hadoop/sbin$ start-all.sh
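Assuming the stock Hadoop 2.6 ports, the daemons' web UIs can then be checked from a browser or with curl; a sketch:

hduser@laptop:~$ curl -I http://localhost:50070/    # NameNode web UI
hduser@laptop:~$ curl -I http://localhost:8088/     # YARN ResourceManager web UI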
1 Comment

Priya Dandekar - 13 days ago
Thanks for your tutorial. I had a difficult time following the official documentation on the Apache website. This seems to be exactly what I was looking for.
Do you recommend creating a new user hdfs for all installations, or will using a default admin user be fine? I have an older version of Ubuntu on my Lenovo machine and your steps are fine so far.
Also, please recommend a good book from a beginner's perspective. I am new to learning big data and am looking to know all aspects of it. Some big data books are here - http://www.fromdev.com/2014/07... however I am not sure which to pick. What is your recommendation for beginners?