Each entry below gives the property name, its default value, and a description.

hadoop.common.configuration.version
    Default: 0.22.0
    Version of this configuration file.

hadoop.tmp.dir
    Default: /tmp/hadoop-${user.name}
    A base for other temporary directories.

io.native.lib.available
    Default: true
    Whether native Hadoop libraries, if present, should be used.

hadoop.http.filter.initializers
    Default: (empty)
    A comma-separated list of class names. Each class in the list must extend
    org.apache.hadoop.http.FilterInitializer. The corresponding Filter will be initialized
    and then applied to all user-facing JSP and servlet web pages. The ordering of the list
    defines the ordering of the filters.
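As a sketch, such a filter list is enabled by overriding this property in core-site.xml; the initializer class named here is hypothetical and stands in for any subclass of org.apache.hadoop.http.FilterInitializer:

    <!-- core-site.xml: register a (hypothetical) HTTP filter initializer -->
    <property>
      <name>hadoop.http.filter.initializers</name>
      <value>com.example.AuditFilterInitializer</value>
    </property>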
hadoop.security.authorization
    Default: false
    Is service-level authorization enabled?

hadoop.security.authentication
    Default: simple
    Possible values are simple (no authentication) and kerberos.

hadoop.security.group.mapping
    Default: org.apache.hadoop.security.ShellBasedUnixGroupsMapping
    Class for user-to-group mapping (get groups for a given user) for ACLs.

hadoop.security.groups.cache.secs
    Default: 300
    Controls how long entries in the cache of user-to-group mappings remain valid. When this
    duration has expired, the group mapping provider is invoked again to get the groups of
    the user, and the result is cached.

hadoop.security.service.user.name.key
    Default: (empty)
    For cases where the same RPC protocol is implemented by multiple servers, this
    configuration is required to specify the principal name to use for the service when the
    client wishes to make an RPC call.

hadoop.rpc.protection
    Default: authentication
    Sets the quality of protection for secured SASL connections. Possible values are
    authentication, integrity, and privacy: authentication means authentication only, with
    no integrity or privacy; integrity implies authentication and integrity are enabled; and
    privacy implies all of authentication, integrity, and privacy are enabled.

hadoop.work.around.non.threadsafe.getpwuid
    Default: false
    Some operating systems or authentication modules are known to have broken
    implementations of getpwuid_r and getpwgid_r, such that these calls are not thread-safe.
    Symptoms of this problem include JVM crashes with a stack trace inside these functions.
    If your system exhibits this issue, enable this parameter to wrap the calls in a lock as
    a workaround. An incomplete list of systems known to have this issue is available at
    http://wiki.apache.org/hadoop/KnownBrokenPwuidImplementations

hadoop.kerberos.kinit.command
    Default: kinit
    Used to periodically renew Kerberos credentials when provided to Hadoop. The default
    setting assumes that kinit is on the PATH of users running the Hadoop client. Change
    this to the absolute path to kinit if that is not the case.
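For reference, a minimal core-site.xml sketch that switches a cluster from simple to Kerberos authentication and turns on service-level authorization; the property names and values come straight from the entries above:

    <!-- core-site.xml: enable Kerberos authentication and service-level authorization -->
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>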
hadoop.logfile.size
    Default: 10000000
    The maximum size of each log file.

hadoop.logfile.count
    Default: 10
    The maximum number of log files.

io.file.buffer.size
    Default: 4096
    The size of the buffer used in sequence files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

io.bytes.per.checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than io.file.buffer.size.

io.skip.checksum.errors
    Default: false
    If true, when a checksum error is encountered while reading a sequence file, entries are
    skipped instead of an exception being thrown.
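To illustrate the constraint above, a core-site.xml sketch that raises the I/O buffer to 64 KB, a multiple of the 4096-byte page size, while leaving io.bytes.per.checksum at its smaller default; the value is illustrative, not a recommendation:

    <!-- core-site.xml: larger I/O buffer (still >= io.bytes.per.checksum) -->
    <property>
      <name>io.file.buffer.size</name>
      <value>65536</value>
    </property>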
io.compression.codecs
    Default: org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec
    A list of the compression codec classes that can be used for compression/decompression.

io.serializations
    Default: org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
    A list of serialization classes that can be used for obtaining serializers and
    deserializers.

io.seqfile.local.dir
    Default: ${hadoop.tmp.dir}/io/local
    The local directory where SequenceFile stores intermediate data files during merge. May
    be a comma-separated list of directories on different devices in order to spread disk
    I/O. Directories that do not exist are ignored.

io.map.index.skip
    Default: 0
    The number of index entries to skip between each read entry; zero by default. Setting
    this to a value larger than zero can facilitate opening large MapFiles using less
    memory.

io.map.index.interval
    Default: 128
    A MapFile consists of two files: a data file (tuples) and an index file (keys). For
    every io.map.index.interval records written to the data file, an entry (record-key,
    data-file-position) is written to the index file. This allows a later binary search
    within the index file to look up records by key and find their closest positions in the
    data file.
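For example, opening very large MapFiles with less memory can be done by skipping index entries, as described above; the skip count here is illustrative:

    <!-- core-site.xml: keep only every 9th index entry in memory (illustrative) -->
    <property>
      <name>io.map.index.skip</name>
      <value>8</value>
    </property>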
fs.defaultFS
    Default: file:///
    The name of the default file system: a URI whose scheme and authority determine the
    FileSystem implementation. The URI's scheme determines the config property
    (fs.SCHEME.impl) naming the FileSystem implementation class. The URI's authority is used
    to determine the host, port, etc. for a filesystem.
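The most common override points the default filesystem at an HDFS NameNode; the host and port below are placeholders:

    <!-- core-site.xml: use HDFS as the default filesystem (placeholder address) -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode.example.com:9000</value>
    </property>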
fs.trash.interval
    Default: 0
    Number of minutes after which a trash checkpoint gets deleted. If zero, the trash
    feature is disabled.

fs.trash.checkpoint.interval
    Default: 0
    Number of minutes between trash checkpoints. Should be smaller than or equal to
    fs.trash.interval. Each time the checkpointer runs, it creates a new checkpoint from the
    current trash contents and removes checkpoints created more than fs.trash.interval
    minutes ago.
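A sketch enabling trash with one-day retention and hourly checkpoints; both values are illustrative, with the checkpoint interval kept at or below the retention interval as required above:

    <!-- core-site.xml: keep deleted files in trash for 24 hours (illustrative) -->
    <property>
      <name>fs.trash.interval</name>
      <value>1440</value>
    </property>
    <property>
      <name>fs.trash.checkpoint.interval</name>
      <value>60</value>
    </property>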
fs.file.impl
    Default: org.apache.hadoop.fs.LocalFileSystem
    The FileSystem for file: URIs.

fs.hdfs.impl
    Default: org.apache.hadoop.hdfs.DistributedFileSystem
    The FileSystem for hdfs: URIs.

fs.AbstractFileSystem.file.impl
    Default: org.apache.hadoop.fs.local.LocalFs
    The AbstractFileSystem for file: URIs.

fs.AbstractFileSystem.hdfs.impl
    Default: org.apache.hadoop.fs.Hdfs
    The AbstractFileSystem for hdfs: URIs.

fs.s3.impl
    Default: org.apache.hadoop.fs.s3.S3FileSystem
    The FileSystem for s3: URIs.

fs.s3n.impl
    Default: org.apache.hadoop.fs.s3native.NativeS3FileSystem
    The FileSystem for s3n: (native S3) URIs.

fs.kfs.impl
    Default: org.apache.hadoop.fs.kfs.KosmosFileSystem
    The FileSystem for kfs: URIs.

fs.hftp.impl
    Default: org.apache.hadoop.hdfs.HftpFileSystem
    The FileSystem for hftp: URIs.

fs.hsftp.impl
    Default: org.apache.hadoop.hdfs.HsftpFileSystem
    The FileSystem for hsftp: URIs.

fs.ftp.impl
    Default: org.apache.hadoop.fs.ftp.FTPFileSystem
    The FileSystem for ftp: URIs.

fs.ftp.host
    Default: 0.0.0.0
    The FTP filesystem connects to this server.

fs.ftp.host.port
    Default: 21
    The FTP filesystem connects to fs.ftp.host on this port.
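A sketch pointing the FTP filesystem at a specific server; the hostname is a placeholder, and the port simply restates the default:

    <!-- core-site.xml: FTP filesystem endpoint (placeholder host) -->
    <property>
      <name>fs.ftp.host</name>
      <value>ftp.example.com</value>
    </property>
    <property>
      <name>fs.ftp.host.port</name>
      <value>21</value>
    </property>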
fs.har.impl
    Default: org.apache.hadoop.fs.HarFileSystem
    The filesystem for Hadoop archives.

fs.har.impl.disable.cache
    Default: true
    Don't cache 'har' filesystem instances.

fs.df.interval
    Default: 60000
    Disk usage statistics refresh interval, in msec.

fs.s3.block.size
    Default: 67108864
    Block size to use when writing files to S3.

fs.s3.buffer.dir
    Default: ${hadoop.tmp.dir}/s3
    Determines where on the local filesystem the S3 filesystem should store files before
    sending them to S3 (or after retrieving them from S3).

fs.s3.maxRetries
    Default: 4
    The maximum number of retries for reading or writing files to S3 before we signal
    failure to the application.

fs.s3.sleepTimeSeconds
    Default: 10
    The number of seconds to sleep between each S3 retry.
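A sketch that stages S3 traffic on a dedicated local disk and retries a little harder; the path and retry count are illustrative:

    <!-- core-site.xml: S3 staging directory and retry budget (illustrative) -->
    <property>
      <name>fs.s3.buffer.dir</name>
      <value>/data1/s3-staging</value>
    </property>
    <property>
      <name>fs.s3.maxRetries</name>
      <value>6</value>
    </property>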
fs.automatic.close
    Default: true
    By default, FileSystem instances are automatically closed at program exit using a JVM
    shutdown hook. Setting this property to false disables this behavior. This is an
    advanced option that should only be used by server applications requiring a more
    carefully orchestrated shutdown sequence.

fs.s3n.block.size
    Default: 67108864
    Block size to use when reading files using the native S3 filesystem (s3n: URIs).

io.seqfile.compress.blocksize
    Default: 1000000
    The minimum block size for compression in block-compressed SequenceFiles.

io.seqfile.lazydecompress
    Default: true
    Should values of block-compressed SequenceFiles be decompressed only when necessary?

io.seqfile.sorter.recordlimit
    Default: 1000000
    The limit on the number of records to be kept in memory in a spill in SequenceFile.Sorter.

io.mapfile.bloom.size
    Default: 1048576
    The size of the BloomFilters used in BloomMapFile. Each time this many keys have been
    appended, the next BloomFilter is created (inside a DynamicBloomFilter). Larger values
    minimize the number of filters, which slightly improves performance, but may waste too
    much space if the total number of keys is usually much smaller than this number.

io.mapfile.bloom.error.rate
    Default: 0.005
    The rate of false positives in the BloomFilters used in BloomMapFile. As this value
    decreases, the size of the BloomFilters increases exponentially. This value is the
    probability of encountering false positives (the default is 0.5%).

hadoop.util.hash.type
    Default: murmur
    The default implementation of Hash. Currently this can take one of two values: 'murmur'
    to select MurmurHash and 'jenkins' to select JenkinsHash.

ipc.client.idlethreshold
    Default: 4000
    Defines the threshold number of connections after which connections will be inspected
    for idleness.

ipc.client.kill.max
    Default: 10
    Defines the maximum number of clients to disconnect in one go.

ipc.client.connection.maxidletime
    Default: 10000
    The maximum time in msec after which a client will bring down the connection to the
    server.

ipc.client.connect.max.retries
    Default: 10
    Indicates the number of retries a client will make to establish a server connection.

ipc.server.listen.queue.size
    Default: 128
    Indicates the length of the listen queue for servers accepting client connections.

ipc.server.tcpnodelay
    Default: false
    Turns Nagle's algorithm on or off for the TCP socket connection on the server. Setting
    this to true disables the algorithm and may decrease latency at the cost of more,
    smaller packets.

ipc.client.tcpnodelay
    Default: false
    Turns Nagle's algorithm on or off for the TCP socket connection on the client. Setting
    this to true disables the algorithm and may decrease latency at the cost of more,
    smaller packets.
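For a latency-sensitive deployment, Nagle's algorithm can be disabled on both sides of the IPC connection, as the two entries above describe; whether the extra, smaller packets are worth it depends on the workload:

    <!-- core-site.xml: disable Nagle's algorithm on server and client -->
    <property>
      <name>ipc.server.tcpnodelay</name>
      <value>true</value>
    </property>
    <property>
      <name>ipc.client.tcpnodelay</name>
      <value>true</value>
    </property>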
hadoop.rpc.socket.factory.class.default
    Default: org.apache.hadoop.net.StandardSocketFactory
    Default SocketFactory to use. This parameter is expected to be formatted as
    "package.FactoryClassName".

hadoop.rpc.socket.factory.class.ClientProtocol
    Default: (empty)
    SocketFactory to use to connect to a DFS. If null or empty, use
    hadoop.rpc.socket.factory.class.default. This socket factory is also used by DFSClient
    to create sockets to DataNodes.

hadoop.socks.server
    Default: (empty)
    Address (host:port) of the SOCKS server to be used by the SocksSocketFactory.
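To route client RPC through a SOCKS proxy, the default socket factory can be swapped for org.apache.hadoop.net.SocksSocketFactory and pointed at the proxy; the proxy address below is a placeholder:

    <!-- core-site.xml: send RPC through a SOCKS proxy (placeholder address) -->
    <property>
      <name>hadoop.rpc.socket.factory.class.default</name>
      <value>org.apache.hadoop.net.SocksSocketFactory</value>
    </property>
    <property>
      <name>hadoop.socks.server</name>
      <value>proxy.example.com:1080</value>
    </property>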
net.topology.node.switch.mapping.impl
    Default: org.apache.hadoop.net.ScriptBasedMapping
    The default implementation of DNSToSwitchMapping. It invokes a script specified in
    net.topology.script.file.name to resolve node names. If the value of
    net.topology.script.file.name is not set, the default value of DEFAULT_RACK is returned
    for all node names.

net.topology.script.file.name
    Default: (empty)
    The script name that should be invoked to resolve DNS names to NetworkTopology names.
    Example: the script would take host.foo.bar as an argument and return /rack1 as the
    output.

net.topology.script.number.args
    Default: 100
    The maximum number of args that the script configured with
    net.topology.script.file.name should be run with. Each arg is an IP address.
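A sketch wiring in a rack-awareness script for the default ScriptBasedMapping; the path is a placeholder, and the script is expected to accept up to net.topology.script.number.args IP addresses per invocation and emit a rack name (such as /rack1) for each:

    <!-- core-site.xml: rack topology script (placeholder path) -->
    <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/topology.sh</value>
    </property>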
file.stream-buffer-size
    Default: 4096
    The size of the buffer used to stream files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

file.bytes-per-checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than file.stream-buffer-size.

file.client-write-packet-size
    Default: 65536
    Packet size for clients to write.

file.blocksize
    Default: 67108864
    Block size.

file.replication
    Default: 1
    Replication factor.

s3.stream-buffer-size
    Default: 4096
    The size of the buffer used to stream files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

s3.bytes-per-checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than s3.stream-buffer-size.

s3.client-write-packet-size
    Default: 65536
    Packet size for clients to write.

s3.blocksize
    Default: 67108864
    Block size.

s3.replication
    Default: 3
    Replication factor.

s3native.stream-buffer-size
    Default: 4096
    The size of the buffer used to stream files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

s3native.bytes-per-checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than s3native.stream-buffer-size.

s3native.client-write-packet-size
    Default: 65536
    Packet size for clients to write.

s3native.blocksize
    Default: 67108864
    Block size.

s3native.replication
    Default: 3
    Replication factor.

kfs.stream-buffer-size
    Default: 4096
    The size of the buffer used to stream files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

kfs.bytes-per-checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than kfs.stream-buffer-size.

kfs.client-write-packet-size
    Default: 65536
    Packet size for clients to write.

kfs.blocksize
    Default: 67108864
    Block size.

kfs.replication
    Default: 3
    Replication factor.

ftp.stream-buffer-size
    Default: 4096
    The size of the buffer used to stream files. It should probably be a multiple of the
    hardware page size (4096 on Intel x86), and it determines how much data is buffered
    during read and write operations.

ftp.bytes-per-checksum
    Default: 512
    The number of bytes per checksum. Must not be larger than ftp.stream-buffer-size.

ftp.client-write-packet-size
    Default: 65536
    Packet size for clients to write.

ftp.blocksize
    Default: 67108864
    Block size.

ftp.replication
    Default: 3
    Replication factor.
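Each scheme above exposes the same five knobs, so an override takes the same shape regardless of filesystem; for instance, doubling the FTP stream buffer while keeping it a multiple of the page size (value illustrative):

    <!-- core-site.xml: larger stream buffer for ftp: URIs (illustrative) -->
    <property>
      <name>ftp.stream-buffer-size</name>
      <value>8192</value>
    </property>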
tfile.io.chunk.size
    Default: 1048576
    Value chunk size, in bytes (defaults to 1 MB). Values shorter than the chunk size are
    guaranteed to have a known value length at read time (see also
    TFile.Reader.Scanner.Entry.isValueLengthKnown()).

tfile.fs.output.buffer.size
    Default: 262144
    Buffer size, in bytes, used for FSDataOutputStream.

tfile.fs.input.buffer.size
    Default: 262144
    Buffer size, in bytes, used for FSDataInputStream.
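As a final sketch, enlarging the TFile stream buffers; the values are illustrative and simply double the defaults listed above:

    <!-- core-site.xml: larger TFile read/write buffers (illustrative) -->
    <property>
      <name>tfile.fs.output.buffer.size</name>
      <value>524288</value>
    </property>
    <property>
      <name>tfile.fs.input.buffer.size</name>
      <value>524288</value>
    </property>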