[Forwarded] Hadoop default parameters
Forwarded from: http://myext.cn/other/56013.html
1 Getting the default configuration
Configuring Hadoop mainly means editing three files: core-site.xml, hdfs-site.xml, and mapred-site.xml. Out of the box these files are empty, which makes it hard to tell which settings will actually take effect, and configurations copied from the web may not work because Hadoop versions differ. There are two ways to browse the full set of options:
1. Download and unpack the Hadoop release matching your version, then search for *.xml to locate core-default.xml, hdfs-default.xml, and mapred-default.xml. These contain the defaults; their keys and descriptions are the reference for configuring a cluster.
2. Browse the Apache site; the three files are at:
http://hadoop.apache.org/common/docs/current/core-default.html
http://hadoop.apache.org/common/docs/current/hdfs-default.html
http://hadoop.apache.org/common/docs/current/mapred-default.html
These pages document the defaults for the current Hadoop release only; for other versions you have to look them up separately on the site. The first method is the better way to get the defaults, since every property comes with a description and can be used directly. Note that core-site.xml holds the global configuration, while hdfs-site.xml and mapred-site.xml hold the HDFS-specific and MapReduce-specific configuration respectively. A minimal override example follows.
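The site files only need the properties you want to change; everything else falls back to the *-default.xml values. A minimal sketch of core-site.xml, assuming a NameNode host named master (matching the example values in the port tables below); /data/hadoop/tmp is a placeholder path, not a value from this post:

```xml
<?xml version="1.0"?>
<!-- core-site.xml: list only the overrides; unlisted keys keep their core-default.xml values -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- default is file:///; this points clients at the NameNode RPC port -->
    <value>hdfs://master:8020/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- placeholder path; the default is /tmp/hadoop-${user.name} -->
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>
```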
2 Commonly used port configuration
2.1 HDFS ports

| Parameter | Description | Default | Config file | Example value |
|---|---|---|---|---|
| fs.default.name | NameNode RPC port | 8020 | core-site.xml | hdfs://master:8020/ |
| dfs.http.address | NameNode web UI port | 50070 | hdfs-site.xml | 0.0.0.0:50070 |
| dfs.datanode.address | DataNode data transfer port | 50010 | hdfs-site.xml | 0.0.0.0:50010 |
| dfs.datanode.ipc.address | DataNode RPC server address and port | 50020 | hdfs-site.xml | 0.0.0.0:50020 |
| dfs.datanode.http.address | DataNode HTTP server address and port | 50075 | hdfs-site.xml | 0.0.0.0:50075 |
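To move any of these off their defaults, override them in hdfs-site.xml. A sketch reusing the example values from the table above (0.0.0.0 binds all interfaces; swap in different port numbers as your environment requires):

```xml
<!-- hdfs-site.xml: NameNode web UI and DataNode listener addresses -->
<configuration>
  <property>
    <name>dfs.http.address</name>
    <!-- 0.0.0.0 listens on all interfaces; change the port if 50070 is taken -->
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
  </property>
</configuration>
```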
2.2 MapReduce ports

| Parameter | Description | Default | Config file | Example value |
|---|---|---|---|---|
| mapred.job.tracker | JobTracker RPC port | 8021 | mapred-site.xml | master:8021 |
| mapred.job.tracker.http.address | JobTracker web UI port | 50030 | mapred-site.xml | 0.0.0.0:50030 |
| mapred.task.tracker.http.address | TaskTracker HTTP port | 50060 | mapred-site.xml | 0.0.0.0:50060 |
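The same pattern applies in mapred-site.xml; a sketch assuming the JobTracker runs on master (per its description in section 3.3, mapred.job.tracker takes a plain host:port, not an hdfs:// URI):

```xml
<!-- mapred-site.xml: JobTracker RPC address and its web UI port -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- host:port, not an hdfs:// URI -->
    <value>master:8021</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>0.0.0.0:50030</value>
  </property>
</configuration>
```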
2.3 Other ports

| Parameter | Description | Default | Config file | Example value |
|---|---|---|---|---|
| dfs.secondary.http.address | Secondary NameNode web UI port | 50090 | hdfs-site.xml | 0.0.0.0:50090 |
3 Notes on the three default configuration reference files
3.1 core-default.html
| No. | Parameter | Default value | Description |
|---|---|---|---|
| 1 | hadoop.tmp.dir | /tmp/hadoop-${user.name} | Base temporary directory. |
| 2 | hadoop.native.lib | true | Whether to use the native Hadoop libraries. |
| 3 | hadoop.http.filter.initializers | | Filter chain initializers for the HTTP server. |
| 4 | hadoop.security.group.mapping | org.apache.hadoop.security.ShellBasedUnixGroupsMapping | Class used to resolve the list of groups a user belongs to. |
| 5 | hadoop.security.authorization | false | Whether service-level authorization is enabled. |
| 6 | hadoop.security.authentication | simple | Authentication mode; simple means no authentication. |
| 7 | hadoop.security.token.service.use_ip | true | Whether to use the IP address when building the service name for connections. |
| 8 | hadoop.logfile.size | 10000000 | Maximum log file size (10 MB). |
| 9 | hadoop.logfile.count | 10 | Number of log files to keep (10). |
| 10 | io.file.buffer.size | 4096 | Buffer size for stream files (4 KB). |
| 11 | io.bytes.per.checksum | 512 | Number of bytes per checksum (512). |
| 12 | io.skip.checksum.errors | false | Whether to skip (true) or throw an exception (false) on checksum errors. |
| 13 | io.compression.codecs | org.apache.hadoop.io.compress.DefaultCodec, org.apache.hadoop.io.compress.GzipCodec, org.apache.hadoop.io.compress.BZip2Codec, org.apache.hadoop.io.compress.SnappyCodec | Available compression/decompression codecs. |
| 14 | io.serializations | org.apache.hadoop.io.serializer.WritableSerialization | Serialization/deserialization classes. |
| 15 | fs.default.name | file:/// | Default filesystem URI. |
| 16 | fs.trash.interval | 0 | Trash checkpoint interval in minutes; 0 disables the trash feature. |
| 17 | fs.file.impl | org.apache.hadoop.fs.LocalFileSystem | Implementation class for the local filesystem. |
| 18 | fs.hdfs.impl | org.apache.hadoop.hdfs.DistributedFileSystem | Implementation class for HDFS. |
| 19 | fs.s3.impl | org.apache.hadoop.fs.s3.S3FileSystem | Implementation class for S3. |
| 20 | fs.s3n.impl | org.apache.hadoop.fs.s3native.NativeS3FileSystem | Implementation class for native S3. |
| 21 | fs.kfs.impl | org.apache.hadoop.fs.kfs.KosmosFileSystem | Implementation class for KFS. |
| 22 | fs.hftp.impl | org.apache.hadoop.hdfs.HftpFileSystem | Implementation class for file access over HTTP (HFTP). |
| 23 | fs.hsftp.impl | org.apache.hadoop.hdfs.HsftpFileSystem | Implementation class for file access over HTTPS (HSFTP). |
| 24 | fs.webhdfs.impl | org.apache.hadoop.hdfs.web.WebHdfsFileSystem | Implementation class for WebHDFS. |
| 25 | fs.ftp.impl | org.apache.hadoop.fs.ftp.FTPFileSystem | Implementation class for FTP. |
| 26 | fs.ramfs.impl | org.apache.hadoop.fs.InMemoryFileSystem | Implementation class for the in-memory filesystem. |
| 27 | fs.har.impl | org.apache.hadoop.fs.HarFileSystem | Implementation class for Hadoop archive (HAR) files. |
| 28 | fs.har.impl.disable.cache | true | Whether to disable caching of HAR filesystem instances. |
| 29 | fs.checkpoint.dir | ${hadoop.tmp.dir}/dfs/namesecondary | Directory where the secondary NameNode stores checkpoint images. |
| 30 | fs.checkpoint.edits.dir | ${fs.checkpoint.dir} | Directory where the secondary NameNode stores checkpoint edit logs. |
| 31 | fs.checkpoint.period | 3600 | Interval in seconds between checkpoints. |
| 32 | fs.checkpoint.size | 67108864 | Edit log size that triggers a checkpoint (64 MB). |
| 33 | fs.s3.block.size | 67108864 | Block size used when writing to S3 (64 MB). |
| 34 | fs.s3.buffer.dir | ${hadoop.tmp.dir}/s3 | Local directory for buffering S3 data. |
| 35 | fs.s3.maxRetries | 4 | Number of retries for S3 reads and writes. |
| 36 | fs.s3.sleepTimeSeconds | 10 | Seconds to sleep between S3 retries. |
| 37 | local.cache.size | 10737418240 | Local cache size (10 GB). |
| 38 | io.seqfile.compress.blocksize | 1000000 | Minimum block size for block-compressed SequenceFiles (about 1 MB). |
| 39 | io.seqfile.lazydecompress | true | Whether block-compressed values are decompressed lazily, only when needed. |
| 40 | io.seqfile.sorter.recordlimit | 1000000 | Maximum number of records held in memory per spill by the SequenceFile sorter. |
| 41 | io.mapfile.bloom.size | 1048576 | Number of keys per Bloom filter in BloomMapFile (1M). |
| 42 | io.mapfile.bloom.error.rate | 0.005 | False-positive rate of the BloomMapFile Bloom filters. |
| 43 | hadoop.util.hash.type | murmur | Default hash method (murmur). |
| 44 | ipc.client.idlethreshold | 4000 | Connection count threshold (4000) after which connections are checked for idleness. |
| 45 | ipc.client.kill.max | 10 | Maximum number of client connections to disconnect in one go (10). |
| 46 | ipc.client.connection.maxidletime | 10000 | Maximum idle time before the client drops its server connection (10 seconds). |
| 47 | ipc.client.connect.max.retries | 10 | Number of retries when establishing a connection to the server (10). |
| 48 | ipc.server.listen.queue.size | 128 | Length of the listen queue for incoming client connections (128). |
| 49 | ipc.server.tcpnodelay | false | Turn TCP_NODELAY on or off for server-side connections. |
| 50 | ipc.client.tcpnodelay | false | Turn TCP_NODELAY on or off for client-side connections. |
| 51 | webinterface.private.actions | false | Whether the web interfaces expose privileged actions (such as killing jobs). |
| 52 | hadoop.rpc.socket.factory.class.default | org.apache.hadoop.net.StandardSocketFactory | Default socket factory class. |
| 53 | hadoop.rpc.socket.factory.class.ClientProtocol | | Socket factory used for connections to DFS. |
| 54 | hadoop.socks.server | | Address of the SOCKS server used by SocksSocketFactory. |
| 55 | topology.node.switch.mapping.impl | org.apache.hadoop.net.ScriptBasedMapping | Implementation class that maps nodes to network switches/racks. |
| 56 | topology.script.file.name | | Script invoked to resolve node addresses to rack IDs. |
| 57 | topology.script.number.args | 100 | Maximum number of arguments passed to the topology script (100). |
| 58 | hadoop.security.uid.cache.secs | 14400 | Seconds to cache UID-to-username mappings (4 hours). |
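In practice only a handful of these get overridden. A minimal core-site.xml sketch tuning two of them; the chosen numbers are illustrative assumptions, not recommendations from the table:

```xml
<!-- core-site.xml: two commonly tuned core defaults -->
<configuration>
  <property>
    <name>fs.trash.interval</name>
    <!-- minutes; 1440 keeps deleted files in the trash for a day (default 0 disables trash) -->
    <value>1440</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <!-- bytes; default 4096 -->
    <value>65536</value>
  </property>
</configuration>
```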
3.2 hdfs-default.html
| No. | Parameter | Default value | Description |
|---|---|---|---|
| 1 | dfs.namenode.logging.level | info | The logging level for dfs namenode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over replications and block creations/deletions), or "all". |
| 2 | dfs.secondary.http.address | 0.0.0.0:50090 | The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. |
| 3 | dfs.datanode.address | 0.0.0.0:50010 | The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. |
| 4 | dfs.datanode.http.address | 0.0.0.0:50075 | The datanode http server address and port. If the port is 0 then the server will start on a free port. |
| 5 | dfs.datanode.ipc.address | 0.0.0.0:50020 | The datanode ipc server address and port. If the port is 0 then the server will start on a free port. |
| 6 | dfs.datanode.handler.count | 3 | The number of server threads for the datanode. |
| 7 | dfs.http.address | 0.0.0.0:50070 | The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. |
| 8 | dfs.https.enable | false | Decide if HTTPS (SSL) is supported on HDFS. |
| 9 | dfs.https.need.client.auth | false | Whether SSL client certificate authentication is required. |
| 10 | dfs.https.server.keystore.resource | ssl-server.xml | Resource file from which ssl server keystore information will be extracted. |
| 11 | dfs.https.client.keystore.resource | ssl-client.xml | Resource file from which ssl client keystore information will be extracted. |
| 12 | dfs.datanode.https.address | 0.0.0.0:50475 | The datanode https server address and port. |
| 13 | dfs.https.address | 0.0.0.0:50470 | The namenode https server address and port. |
| 14 | dfs.datanode.dns.interface | default | The name of the Network Interface from which a data node should report its IP address. |
| 15 | dfs.datanode.dns.nameserver | default | The host name or IP address of the name server (DNS) which a DataNode should use to determine the host name used by the NameNode for communication and display purposes. |
| 16 | dfs.replication.considerLoad | true | Decide if chooseTarget considers the target's load or not. |
| 17 | dfs.default.chunk.view.size | 32768 | The number of bytes to view for a file on the browser. |
| 18 | dfs.datanode.du.reserved | 0 | Reserved space in bytes per volume. Always leave this much space free for non dfs use. |
| 19 | dfs.name.dir | ${hadoop.tmp.dir}/dfs/name | Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. |
| 20 | dfs.name.edits.dir | ${dfs.name.dir} | Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is the same as dfs.name.dir. |
| 21 | dfs.web.ugi | webuser,webgroup | The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2,... |
| 22 | dfs.permissions | true | If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. |
| 23 | dfs.permissions.supergroup | supergroup | The name of the group of super-users. |
| 24 | dfs.block.access.token.enable | false | If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. |
| 25 | dfs.block.access.key.update.interval | 600 | Interval in minutes at which namenode updates its access keys. |
| 26 | dfs.block.access.token.lifetime | 600 | The lifetime of access tokens in minutes. |
| 27 | dfs.data.dir | ${hadoop.tmp.dir}/dfs/data | Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. |
| 28 | dfs.datanode.data.dir.perm | 755 | Permissions for the directories on the local filesystem where the DFS data node stores its blocks. The permissions can either be octal or symbolic. |
| 29 | dfs.replication | 3 | Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time. |
| 30 | dfs.replication.max | 512 | Maximal block replication. |
| 31 | dfs.replication.min | 1 | Minimal block replication. |
| 32 | dfs.block.size | 67108864 | The default block size for new files. |
| 33 | dfs.df.interval | 60000 | Disk usage statistics refresh interval in msec. |
| 34 | dfs.client.block.write.retries | 3 | The number of retries for writing blocks to the data nodes, before we signal failure to the application. |
| 35 | dfs.blockreport.intervalMsec | 3600000 | Determines block reporting interval in milliseconds. |
| 36 | dfs.blockreport.initialDelay | 0 | Delay for first block report in seconds. |
| 37 | dfs.heartbeat.interval | 3 | Determines datanode heartbeat interval in seconds. |
| 38 | dfs.namenode.handler.count | 10 | The number of server threads for the namenode. |
| 39 | dfs.safemode.threshold.pct | 0.999f | Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent. |
| 40 | dfs.safemode.extension | 30000 | Determines extension of safe mode in milliseconds after the threshold level is reached. |
| 41 | dfs.balance.bandwidthPerSec | 1048576 | Specifies the maximum amount of bandwidth that each datanode can utilize for the balancing purpose in terms of the number of bytes per second. |
| 42 | dfs.hosts | | Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted. |
| 43 | dfs.hosts.exclude | | Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. |
| 44 | dfs.max.objects | 0 | The maximum number of files, directories and blocks dfs supports. A value of zero indicates no limit to the number of objects that dfs supports. |
| 45 | dfs.namenode.decommission.interval | 30 | Namenode periodicity in seconds to check if decommission is complete. |
| 46 | dfs.namenode.decommission.nodes.per.interval | 5 | The number of nodes namenode checks if decommission is complete in each dfs.namenode.decommission.interval. |
| 47 | dfs.replication.interval | 3 | The periodicity in seconds with which the namenode computes replication work for datanodes. |
| 48 | dfs.access.time.precision | 3600000 | The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS. |
| 49 | dfs.support.append | false | Does HDFS allow appends to files? This is currently set to false because there are bugs in the "append code" and it is not supported in any production cluster. |
| 50 | dfs.namenode.delegation.key.update-interval | 86400000 | The update interval for the master key for delegation tokens in the namenode, in milliseconds. |
| 51 | dfs.namenode.delegation.token.max-lifetime | 604800000 | The maximum lifetime in milliseconds for which a delegation token is valid. |
| 52 | dfs.namenode.delegation.token.renew-interval | 86400000 | The renewal interval for delegation tokens in milliseconds. |
| 53 | dfs.datanode.failed.volumes.tolerated | 0 | The number of volumes that are allowed to fail before a datanode stops offering service. By default any volume failure will cause a datanode to shutdown. |
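A sketch of the hdfs-site.xml overrides most clusters make first, drawing on rows 19, 27, and 29 above; the /data/... paths are placeholders, not values from this table:

```xml
<!-- hdfs-site.xml: storage locations and replication -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- comma-delimited list; the fsimage is replicated to every listed directory -->
    <value>/data/1/dfs/name,/data/2/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- blocks are spread across the listed directories, typically one per disk -->
    <value>/data/1/dfs/data,/data/2/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```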
3.3 mapred-default.html
| No. | Parameter | Default value | Description |
|---|---|---|---|
| 1 | hadoop.job.history.location | | If the job tracker is static the history files are stored in this single well known place. If no value is set here, by default, it is in the local file system at ${hadoop.log.dir}/history. |
| 2 | hadoop.job.history.user.location | | User can specify a location to store the history files of a particular job. If nothing is specified, the logs are stored in the output directory. The files are stored in "_logs/history/" in the directory. User can stop logging by giving the value "none". |
| 3 | mapred.job.tracker.history.completed.location | | The completed job history files are stored at this single well known location. If nothing is specified, the files are stored at ${hadoop.job.history.location}/done. |
| 4 | io.sort.factor | 10 | The number of streams to merge at once while sorting files. This determines the number of open file handles. |
| 5 | io.sort.mb | 100 | The total amount of buffer memory to use while sorting files, in megabytes. By default, gives each merge stream 1 MB, which should minimize seeks. |
| 6 | io.sort.record.percent | 0.05 | The percentage of io.sort.mb dedicated to tracking record boundaries. Let this value be r, and io.sort.mb be x. The maximum number of records collected before the collection thread must block is equal to (r * x) / 4. |
| 7 | io.sort.spill.percent | 0.80 | The soft limit in either the buffer or record collection buffers. Once reached, a thread will begin to spill the contents to disk in the background. Note that this does not imply any chunking of data to the spill. A value less than 0.5 is not recommended. |
| 8 | io.map.index.skip | 0 | Number of index entries to skip between each entry. Zero by default. Setting this to values larger than zero can facilitate opening large map files using less memory. |
| 9 | mapred.job.tracker | local | The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. |
| 10 | mapred.job.tracker.http.address | 0.0.0.0:50030 | The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. |
| 11 | mapred.job.tracker.handler.count | 10 | The number of server threads for the JobTracker. This should be roughly 4% of the number of tasktracker nodes. |
| 12 | mapred.task.tracker.report.address | 127.0.0.1:0 | The interface and port that the task tracker server listens on. Since it is only connected to by the tasks, it uses the local interface. EXPERT ONLY. Should only be changed if your host does not have the loopback interface. |
| 13 | mapred.local.dir | ${hadoop.tmp.dir}/mapred/local | The local directory where MapReduce stores intermediate data files. May be a comma-separated list of directories on different devices in order to spread disk i/o. Directories that do not exist are ignored. |
| 14 | mapred.system.dir | ${hadoop.tmp.dir}/mapred/system | The directory where MapReduce stores control files. |
| 15 | mapreduce.jobtracker.staging.root.dir | ${hadoop.tmp.dir}/mapred/staging | The root of the staging area for users' job files. In practice, this should be the directory where users' home directories are located (usually /user). |
| 16 | mapred.temp.dir | ${hadoop.tmp.dir}/mapred/temp | A shared directory for temporary files. |
| 17 | mapred.local.dir.minspacestart | 0 | If the space in mapred.local.dir drops under this, do not ask for more tasks. Value in bytes. |
| 18 | mapred.local.dir.minspacekill | 0 | If the space in mapred.local.dir drops under this, do not ask for more tasks until all the current ones have finished and cleaned up. Also, to save the rest of the tasks we have running, kill one of them, to clean up some space. Start with the reduce tasks, then go with the ones that have finished the least. Value in bytes. |
| 19 | mapred.tasktracker.expiry.interval | 600000 | Expert: The time-interval, in milliseconds, after which a tasktracker is declared 'lost' if it doesn't send heartbeats. |
| 20 | mapred.tasktracker.resourcecalculatorplugin | | Name of the class whose instance will be used to query resource information on the tasktracker. The class must be an instance of org.apache.hadoop.util.ResourceCalculatorPlugin. If the value is null, the tasktracker attempts to use a class appropriate to the platform. Currently, the only platform supported is Linux. |
| 21 | mapred.tasktracker.taskmemorymanager.monitoring-interval | 5000 | The interval, in milliseconds, for which the tasktracker waits between two cycles of monitoring its tasks' memory usage. Used only if tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory. |
| 22 | mapred.tasktracker.tasks.sleeptime-before-sigkill | 5000 | The time, in milliseconds, the tasktracker waits for sending a SIGKILL to a process, after it has been sent a SIGTERM. |
| 23 | mapred.map.tasks | 2 | The default number of map tasks per job. Ignored when mapred.job.tracker is "local". |
| 24 | mapred.reduce.tasks | 1 | The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity, so that if a node fails the reduces can still be executed in a single wave. Ignored when mapred.job.tracker is "local". |
| 25 | mapreduce.tasktracker.outofband.heartbeat | false | Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task-completion for better latency. |
| 26 | mapreduce.tasktracker.outofband.heartbeat.damper | 1000000 | When out-of-band heartbeats are enabled, provides damping to avoid overwhelming the JobTracker if too many out-of-band heartbeats would occur. The damping is calculated such that the heartbeat interval is divided by (T*D + 1) where T is the number of completed tasks and D is the damper value. Setting this to a high value like the default provides no damping -- as soon as any task finishes, a heartbeat will be sent. Setting this parameter to 0 is equivalent to disabling the out-of-band heartbeat feature. A value of 1 would indicate that, after one task has completed, the time to wait before the next heartbeat would be 1/2 the usual time. After two tasks have finished, it would be 1/3 the usual time, etc. |
| 27 | mapred.jobtracker.restart.recover | false | "true" to enable (job) recovery upon restart, "false" to start afresh. |
| 28 | mapred.jobtracker.job.history.block.size | 3145728 | The block size of the job history file. Since the job recovery uses job history, it's important to dump job history to disk as soon as possible. Note that this is an expert level parameter. The default value is set to 3 MB. |
| 29 | mapreduce.job.split.metainfo.maxsize | 10000000 | The maximum permissible size of the split metainfo file. The JobTracker won't attempt to read split metainfo files bigger than the configured value. No limits if set to -1. |
| 30 | mapred.jobtracker.taskScheduler | org.apache.hadoop.mapred.JobQueueTaskScheduler | The class responsible for scheduling the tasks. |
| 31 | mapred.jobtracker.taskScheduler.maxRunningTasksPerJob | | The maximum number of running tasks for a job before it gets preempted. No limits if undefined. |
| 32 | mapred.map.max.attempts | 4 | Expert: The maximum number of attempts per map task. In other words, the framework will try to execute a map task this many times before giving up on it. |
| 33 | mapred.reduce.max.attempts | 4 | Expert: The maximum number of attempts per reduce task. In other words, the framework will try to execute a reduce task this many times before giving up on it. |
| 34 | mapred.reduce.parallel.copies | 5 | The default number of parallel transfers run by reduce during the copy (shuffle) phase. |
| 35 | mapreduce.reduce.shuffle.maxfetchfailures | 10 | The maximum number of times a reducer tries to fetch a map output before it reports it. |
| 36 | mapreduce.reduce.shuffle.connect.timeout | 180000 | Expert: The maximum amount of time (in milliseconds) a reduce task spends in trying to connect to a tasktracker for getting map output. |
| 37 | mapreduce.reduce.shuffle.read.timeout | 180000 | Expert: The maximum amount of time (in milliseconds) a reduce task waits for map output data to be available for reading after obtaining a connection. |
| 38 | mapred.task.timeout | 600000 | The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string. |
| 39 | mapred.tasktracker.map.tasks.maximum | 2 | The maximum number of map tasks that will be run simultaneously by a task tracker. |
| 40 | mapred.tasktracker.reduce.tasks.maximum | 2 | The maximum number of reduce tasks that will be run simultaneously by a task tracker. |
| 41 | mapred.jobtracker.completeuserjobs.maximum | 100 | The maximum number of complete jobs per user to keep around before delegating them to the job history. |
| 42 | mapreduce.reduce.input.limit | -1 | The limit on the input size of the reduce. If the estimated input size of the reduce is greater than this value, the job is failed. A value of -1 means that there is no limit set. |
| 43 | mapred.job.tracker.retiredjobs.cache.size | 1000 | The number of retired job statuses to keep in the cache. |
| 44 | mapred.job.tracker.jobhistory.lru.cache.size | 5 | The number of job history files loaded in memory. The jobs are loaded when they are first accessed. The cache is cleared based on LRU. |
| 45 | mapred.child.java.opts | -Xmx200m | Java opts for the task tracker child processes. The following symbol, if present, will be interpolated: @taskid@ is replaced by the current TaskID. Any other occurrences of '@' will go unchanged. For example, to enable verbose gc logging to a file named for the taskid in /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of: -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc. The configuration variable mapred.child.ulimit can be used to control the maximum virtual memory of the child processes. |
| 46 | mapred.child.env | | User-added environment variables for the task tracker child processes. Example: 1) A=foo sets the env variable A to foo; 2) B=$B:c inherits the tasktracker's B env variable. |
| 47 | mapred.child.ulimit | | The maximum virtual memory, in KB, of a process launched by the Map-Reduce framework. This can be used to control both the Mapper/Reducer tasks and applications using Hadoop Pipes, Hadoop Streaming etc. By default it is left unspecified to let cluster admins control it via limits.conf and other such relevant mechanisms. Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to the JVM, else the VM might not start. |
| 48 | mapred.cluster.map.memory.mb | -1 | The size, in terms of virtual memory, of a single map slot in the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single map task via mapred.job.map.memory.mb, up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off. |
| 49 | mapred.cluster.reduce.memory.mb | -1 | The size, in terms of virtual memory, of a single reduce slot in the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single reduce task via mapred.job.reduce.memory.mb, up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off. |
| 50 | mapred.cluster.max.map.memory.mb | -1 | The maximum size, in terms of virtual memory, of a single map task launched by the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single map task via mapred.job.map.memory.mb, up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off. |
| 51 | mapred.cluster.max.reduce.memory.mb | -1 | The maximum size, in terms of virtual memory, of a single reduce task launched by the Map-Reduce framework, used by the scheduler. A job can ask for multiple slots for a single reduce task via mapred.job.reduce.memory.mb, up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off. |
| 52 | mapred.job.map.memory.mb | -1 | The size, in terms of virtual memory, of a single map task for the job. A job can ask for multiple slots for a single map task, rounded up to the next multiple of mapred.cluster.map.memory.mb and up to the limit specified by mapred.cluster.max.map.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off iff mapred.cluster.map.memory.mb is also turned off (-1). |
| 53 | mapred.job.reduce.memory.mb | -1 | The size, in terms of virtual memory, of a single reduce task for the job. A job can ask for multiple slots for a single reduce task, rounded up to the next multiple of mapred.cluster.reduce.memory.mb and up to the limit specified by mapred.cluster.max.reduce.memory.mb, if the scheduler supports the feature. The value of -1 indicates that this feature is turned off iff mapred.cluster.reduce.memory.mb is also turned off (-1). |
| 54 | mapred.child.tmp | /tmp | To set the value of the tmp directory for map and reduce tasks. If the value is an absolute path, it is directly assigned. Otherwise, it is prepended with the task's working directory. The java tasks are executed with the option -Djava.io.tmpdir='the absolute path of the tmp dir'. Pipes and streaming are set with the environment variable TMPDIR='the absolute path of the tmp dir'. |
| 55 | mapred.inmem.merge.threshold | 1000 | The threshold, in terms of the number of files, for the in-memory merge process. When we accumulate the threshold number of files we initiate the in-memory merge and spill to disk. A value of 0 or less indicates that there is no threshold, and we instead depend only on the ramfs's memory consumption to trigger the merge. |
| 56 | mapred.job.shuffle.merge.percent | 0.66 | The usage threshold at which an in-memory merge will be initiated, expressed as a percentage of the total memory allocated to storing in-memory map outputs, as defined by mapred.job.shuffle.input.buffer.percent. |
| 57 | mapred.job.shuffle.input.buffer.percent | 0.70 | The percentage of memory to be allocated from the maximum heap size to storing map outputs during the shuffle. |
| 58 | mapred.job.reduce.input.buffer.percent | 0.0 | The percentage of memory, relative to the maximum heap size, to retain map outputs during the reduce. When the shuffle is concluded, any remaining map outputs in memory must consume less than this threshold before the reduce can begin. |
| 59 | mapred.map.tasks.speculative.execution | true | If true, then multiple instances of some map tasks may be executed in parallel. |
| 60 | mapred.reduce.tasks.speculative.execution | true | If true, then multiple instances of some reduce tasks may be executed in parallel. |
| 61 | mapred.job.reuse.jvm.num.tasks | 1 | How many tasks to run per jvm. If set to -1, there is no limit. |
| 62 | mapred.min.split.size | 0 | The minimum size chunk that map input should be split into. Note that some file formats may have minimum split sizes that take priority over this setting. |
| 63 | mapred.jobtracker.maxtasks.per.job | -1 | The maximum number of tasks for a single job. A value of -1 indicates that there is no maximum. |
| 64 | mapred.submit.replication | 10 | The replication level for submitted job files. This should be around the square root of the number of nodes. |
| 65 | mapred.tasktracker.dns.interface | default | The name of the Network Interface from which a task tracker should report its IP address. |
| 66 | mapred.tasktracker.dns.nameserver | default | The host name or IP address of the name server (DNS) which a TaskTracker should use to determine the host name used by the JobTracker for communication and display purposes. |
| 67 | tasktracker.http.threads | 40 | The number of worker threads for the http server. This is used for map output fetching. |
| 68 | mapred.task.tracker.http.address | 0.0.0.0:50060 | The task tracker http server address and port. If the port is 0 then the server will start on a free port. |
| 69 | keep.failed.task.files | false | Should the files for failed tasks be kept? This should only be used on jobs that are failing, because the storage is never reclaimed. It also prevents the map outputs from being erased from the reduce directory as they are consumed. |
| 70 | mapred.output.compress | false | Should the job outputs be compressed? |
| 71 | mapred.output.compression.type | RECORD | If the job outputs are to be compressed as SequenceFiles, how should they be compressed? Should be one of NONE, RECORD or BLOCK. |
| 72 | mapred.output.compression.codec | org.apache.hadoop.io.compress.DefaultCodec | If the job outputs are compressed, how should they be compressed? |
| 73 | mapred.compress.map.output | false | Should the outputs of the maps be compressed before being sent across the network? Uses SequenceFile compression. |
| 74 | mapred.map.output.compression.codec | org.apache.hadoop.io.compress.DefaultCodec | If the map outputs are compressed, how should they be compressed? |
| 75 | map.sort.class | org.apache.hadoop.util.QuickSort | The default sort class for sorting keys. |
| 76 | mapred.userlog.limit.kb | 0 | The maximum size of user-logs of each task in KB. 0 disables the cap. |
| 77 | mapred.userlog.retain.hours | 24 | The maximum time, in hours, for which the user-logs are to be retained after the job completion. |
| 78 | mapred.user.jobconf.limit | 5242880 | The maximum allowed size of the user jobconf. The default is set to 5 MB. |
| 79 | mapred.hosts | | Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are permitted. |
| 80 | mapred.hosts.exclude | | Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded. |
| 81 | mapred.heartbeats.in.second | 100 | Expert: Approximate number of heart-beats that could arrive at JobTracker in a second. Assuming each RPC can be processed in 10 msec, the default value is made 100 RPCs in a second. |
| 82 | mapred.max.tracker.blacklists | 4 | The number of blacklists for a tasktracker by various jobs after which the tasktracker will be marked as potentially faulty and is a candidate for graylisting across all jobs. (Unlike blacklisting, this is advisory; the tracker remains active. However, it is reported as graylisted in the web UI, with the expectation that chronically graylisted trackers will be manually decommissioned.) This value is tied to mapred.jobtracker.blacklist.fault-timeout-window; faults older than the window width are forgiven, so the tracker will recover from transient problems. It will also become healthy after a restart. |
| 83 | mapred.jobtracker.blacklist.fault-timeout-window | 180 | The timeout (in minutes) after which per-job tasktracker faults are forgiven. The window is logically a circular buffer of time-interval buckets whose width is defined by mapred.jobtracker.blacklist.fault-bucket-width; when the "now" pointer moves across a bucket boundary, the previous contents (faults) of the new bucket are cleared. In other words, the timeout's granularity is determined by the bucket width. |
| 84 | mapred.jobtracker.blacklist.fault-bucket-width | 15 | The width (in minutes) of each bucket in the tasktracker fault timeout window. Each bucket is reused in a circular manner after a full timeout-window interval (defined by mapred.jobtracker.blacklist.fault-timeout-window). |
| 85 | mapred.max.tracker.failures | 4 | The number of task failures on a tasktracker of a given job after which new tasks of that job aren't assigned to it. |
| 86 | jobclient.output.filter | FAILED | The filter for controlling the output of the task's userlogs sent to the console of the JobClient. The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and ALL. |
| 87 | mapred.job.tracker.persist.jobstatus.active | false | Indicates if persistency of job status information is active or not. |
| 88 | mapred.job.tracker.persist.jobstatus.hours | 0 | The number of hours job status information is persisted in DFS. The job status information will be available after it drops out of the memory queue and between jobtracker restarts. With a zero value the job status information is not persisted at all in DFS. |
| 89 | mapred.job.tracker.persist.jobstatus.dir | /jobtracker/jobsInfo | The directory where the job status information is persisted in a file system to be available after it drops out of the memory queue and between jobtracker restarts. |
| 90 | mapreduce.job.complete.cancel.delegation.tokens | true | If false, do not unregister/cancel delegation tokens from renewal, because the same tokens may be used by spawned jobs. |
| 91 | mapred.task.profile | false | Whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory. The value is "true" if task profiling is enabled. |
| 92 | mapred.task.profile.maps | 0-2 | To set the ranges of map tasks to profile. mapred.task.profile has to be set to true for the value to be accounted. |
| 93 | mapred.task.profile.reduces | 0-2 | To set the ranges of reduce tasks to profile. mapred.task.profile has to be set to true for the value to be accounted. |
| 94 | mapred.line.input.format.linespermap | 1 | Number of lines per split in NLineInputFormat. |
| 95 | mapred.skip.attempts.to.start.skipping | 2 | The number of task attempts AFTER which skip mode will be kicked off. When skip mode is kicked off, the task reports to the TaskTracker the range of records which it will process next, so that on failures the TT knows which records are possibly bad. On further executions, those are skipped. |
| 96 | mapred.skip.map.auto.incr.proc.count | true | The flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function. This value must be set to false for applications which process the records asynchronously or buffer the input records, for example streaming. In such cases applications should increment this counter on their own. |
| 97 | mapred.skip.reduce.auto.incr.proc.count | true | The flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function. This value must be set to false for applications which process the records asynchronously or buffer the input records, for example streaming. In such cases applications should increment this counter on their own. |
| 98 | mapred.skip.out.dir | | If no value is specified here, the skipped records are written to the output directory at _logs/skip. User can stop writing skipped records by giving the value "none". |
| 99 | mapred.skip.map.max.skip.records | 0 | The number of acceptable skip records surrounding the bad record PER bad record in the mapper. The number includes the bad record as well. To turn the feature of detection/skipping of bad records off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down. Whatever records (depends on application) get skipped are acceptable. |
| 100 | mapred.skip.reduce.max.skip.groups | 0 | The number of acceptable skip groups surrounding the bad group PER bad group in the reducer. The number includes the bad group as well. To turn the feature of detection/skipping of bad groups off, set the value to 0. The framework tries to narrow down the skipped range by retrying until this threshold is met OR all attempts get exhausted for this task. Set the value to Long.MAX_VALUE to indicate that the framework need not try to narrow down. Whatever groups (depends on application) get skipped are acceptable. |
| 101 | job.end.retry.attempts | 0 | Indicates how many times hadoop should attempt to contact the notification URL. |
| 102 | job.end.retry.interval | 30000 | Indicates time in milliseconds between notification URL retry calls. |
| 103 | hadoop.rpc.socket.factory.class.JobSubmissionProtocol | | SocketFactory to use to connect to a Map/Reduce master (JobTracker). If null or empty, then use hadoop.rpc.socket.factory.class.default. |
| 104 | mapred.task.cache.levels | 2 | This is the max level of the task cache. For example, if the level is 2, the tasks cached are at the host level and at the rack level. |
| 105 | mapred.queue.names | default | Comma separated list of queues configured for this jobtracker. Jobs are added to queues and schedulers can configure different scheduling properties for the various queues. To configure a property for a queue, the name of the queue must match the name specified in this value. Queue properties that are common to all schedulers are configured here with the naming convention mapred.queue.$QUEUE-NAME.$PROPERTY-NAME, e.g. mapred.queue.default.submit-job-acl. The number of queues configured in this parameter could depend on the type of scheduler being used, as specified in mapred.jobtracker.taskScheduler. For example, the JobQueueTaskScheduler supports only a single queue, which is the default configured here. Before adding more queues, ensure that the scheduler you've configured supports multiple queues. |
| 106 | mapred.acls.enabled | false | Specifies whether ACLs should be checked for authorization of users for doing various queue and job level operations. ACLs are disabled by default. If enabled, access control checks are made by JobTracker and TaskTracker when requests are made by users for queue operations like submitting a job to a queue and killing a job in the queue, and job operations like viewing the job-details (see mapreduce.job.acl-view-job) or modifying the job (see mapreduce.job.acl-modify-job) using Map/Reduce APIs, RPCs or via the console and web user interfaces. |
| 107 | mapred.queue.default.state | RUNNING | This value defines the state the default queue is in. The values can be either "STOPPED" or "RUNNING". This value can be changed at runtime. |
| 108 | mapred.job.queue.name | default | Queue to which a job is submitted. This must match one of the queues defined in mapred.queue.names for the system. Also, the ACL setup for the queue must allow the current user to submit a job to the queue. Before specifying a queue, ensure that the system is configured with the queue, and access is allowed for submitting jobs to the queue. |
| 109 | mapreduce.job.acl-modify-job | | Job specific access-control list for 'modifying' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapred.acls.enabled to true. This specifies the list of users and/or groups who can do modification operations on the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to modify this job. If set to ' ' (i.e. a space), it allows none. This configuration is used to guard all the modifications with respect to this job and takes care of all the following operations: killing this job; killing a task of this job; failing a task of this job; setting the priority of this job. Each of these operations is also protected by the per-queue level ACL "acl-administer-jobs" configured via mapred-queues.xml, so a caller should have the authorization to satisfy either the queue-level ACL or the job-level ACL. Irrespective of this ACL configuration, the job owner, the user who started the cluster, cluster administrators configured via mapreduce.cluster.administrators, and queue administrators of the queue to which this job is submitted (configured via mapred.queue.queue-name.acl-administer-jobs in mapred-queue-acls.xml) can do all the modification operations on a job. By default, nobody else besides the job owner, the user who started the cluster, cluster administrators and queue administrators can perform modification operations on a job. |
| 110 | mapreduce.job.acl-view-job | | Job specific access-control list for 'viewing' the job. It is only used if authorization is enabled in Map/Reduce by setting the configuration property mapred.acls.enabled to true. This specifies the list of users and/or groups who can view private details about the job. For specifying a list of users and groups the format to use is "user1,user2 group1,group2". If set to '*', it allows all users/groups to view this job. If set to ' ' (i.e. a space), it allows none. This configuration is used to guard some of the job-views and at present only protects APIs that can return possibly sensitive information of the job owner, like: job-level counters; task-level counters; tasks' diagnostic information; task logs displayed on the TaskTracker web UI; and job.xml shown by the JobTracker's web UI. Every other piece of information about jobs is still accessible by any other user, e.g., JobStatus, JobProfile, the list of jobs in the queue, etc. Irrespective of this ACL configuration, the job owner, the user who started the cluster, cluster administrators configured via mapreduce.cluster.administrators, and queue administrators of the queue to which this job is submitted (configured via mapred.queue.queue-name.acl-administer-jobs in mapred-queue-acls.xml) can do all the view operations on a job. By default, nobody else besides the job owner, the user who started the cluster, cluster administrators and queue administrators can perform view operations on a job. |
| 111 | mapred.tasktracker.indexcache.mb | 10 | The maximum memory that a task tracker allows for the index cache that is used when serving map outputs to reducers. |
| 112 | mapred.combine.recordsBeforeProgress | 10000 | The number of records to process during combine output collection before sending a progress notification to the TaskTracker. |
| 113 | mapred.merge.recordsBeforeProgress | 10000 | The number of records to process during merge before sending a progress notification to the TaskTracker. |
| 114 | mapred.reduce.slowstart.completed.maps | 0.05 | Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job. |
| 115 | mapred.task.tracker.task-controller | org.apache.hadoop.mapred.DefaultTaskController | TaskController which is used to launch and manage task execution. |
| 116 | mapreduce.tasktracker.group | | Expert: Group to which the TaskTracker belongs. If LinuxTaskController is configured via mapreduce.tasktracker.taskcontroller, the group owner of the task-controller binary should be the same as this group. |
| 117 | mapred.healthChecker.script.path | | Absolute path to the script which is periodically run by the node health monitoring service to determine if the node is healthy or not. If the value of this key is empty or the file does not exist in the location configured here, the node health monitoring service is not started. |
| 118 | mapred.healthChecker.interval | 60000 | Frequency at which the node health script is run, in milliseconds. |
| 119 | mapred.healthChecker.script.timeout | 600000 | Time after which the node health script will be killed if unresponsive, and the script considered to have failed. |
| 120 | mapred.healthChecker.script.args | | Comma-separated list of arguments to be passed to the node health script when it is launched. |
| 121 | mapreduce.job.counters.limit | 120 | Limit on the number of counters allowed per job. |
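A sketch of a mapred-site.xml that resizes the per-node task slots (rows 39-40) and the child JVM heap (row 45); the values are illustrative assumptions sized to a small worker node, not recommendations from this table:

```xml
<!-- mapred-site.xml: per-node task slots and child JVM heap -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- default 2; usually sized to the node's cores -->
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <!-- default -Xmx200m; @taskid@ interpolation still applies -->
    <value>-Xmx512m</value>
  </property>
</configuration>
```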