Uses of Class org.apache.hadoop.hbase.security.User
Packages that use User | |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides HBase Client |
org.apache.hadoop.hbase.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.compactions | |
org.apache.hadoop.hbase.security | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.security.token | |
org.apache.hadoop.hbase.security.visibility | |
Uses of User in org.apache.hadoop.hbase |
---|
Methods in org.apache.hadoop.hbase with parameters of type User | |
---|---|
JVMClusterUtil.MasterThread |
LocalHBaseCluster.addMaster(org.apache.hadoop.conf.Configuration c,
int index,
User user)
|
JVMClusterUtil.RegionServerThread |
LocalHBaseCluster.addRegionServer(org.apache.hadoop.conf.Configuration config,
int index,
User user)
|
void |
HTableDescriptor.setOwner(User owner)
Deprecated. |
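The LocalHBaseCluster methods above let a test start master and region server threads under explicit user identities. Below is a minimal sketch, assuming test-only users created with User.createUserForTesting; the user names, groups, and thread indices are illustrative, not part of this API page.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.LocalHBaseCluster;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class LocalClusterWithUsers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Start a local cluster with no preconfigured masters or region servers.
    LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 0, 0);

    // Test-only identities; names and groups are illustrative.
    User masterUser = User.createUserForTesting(conf, "master-user", new String[] {"supergroup"});
    User rsUser = User.createUserForTesting(conf, "rs-user", new String[] {"supergroup"});

    // Add one master and one region server thread, each bound to its own User.
    JVMClusterUtil.MasterThread master = cluster.addMaster(conf, 0, masterUser);
    JVMClusterUtil.RegionServerThread rs = cluster.addRegionServer(conf, 0, rsUser);

    cluster.startup();
    // ... run test logic ...
    cluster.shutdown();
  }
}
```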
Uses of User in org.apache.hadoop.hbase.client |
---|
Fields in org.apache.hadoop.hbase.client declared as User | |
---|---|
protected User |
HConnectionManager.HConnectionImplementation.user
|
Methods in org.apache.hadoop.hbase.client with parameters of type User | |
---|---|
static HConnection |
HConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user)
Create a new HConnection instance using the passed conf instance. |
static HConnection |
HConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf,
User user)
Create a new HConnection instance using the passed conf instance. |
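The two createConnection overloads above let callers open a connection that performs its RPCs as an explicit User rather than the process's ambient login. A minimal sketch, assuming a reachable cluster; the user name and table name are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.security.UserGroupInformation;

public class ConnectionAsUser {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Wrap an arbitrary UserGroupInformation as an HBase User.
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("alice"); // illustrative
    User user = User.create(ugi);

    // Variant without a pool: the connection manages its own executor.
    HConnection conn = HConnectionManager.createConnection(conf, user);

    // Variant with a caller-supplied pool.
    ExecutorService pool = Executors.newFixedThreadPool(4);
    HConnection pooledConn = HConnectionManager.createConnection(conf, pool, user);

    HTableInterface table = conn.getTable(TableName.valueOf("my_table")); // illustrative table
    try {
      // ... issue reads/writes as "alice" ...
    } finally {
      table.close();
      conn.close();
      pooledConn.close();
      pool.shutdown();
    }
  }
}
```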
Uses of User in org.apache.hadoop.hbase.ipc |
---|
Fields in org.apache.hadoop.hbase.ipc declared as User | |
---|---|
protected User |
RpcServer.Connection.user
|
Methods in org.apache.hadoop.hbase.ipc that return User | |
---|---|
static User |
RpcServer.getRequestUser()
Returns the user credentials associated with the current RPC request or null if no credentials were provided. |
User |
RpcCallContext.getRequestUser()
Returns the user credentials associated with the current RPC request or null if no credentials were provided. |
Methods in org.apache.hadoop.hbase.ipc with parameters of type User | |
---|---|
com.google.protobuf.BlockingRpcChannel |
RpcClient.createBlockingRpcChannel(ServerName sn,
User ticket,
int rpcTimeout)
Creates a "channel" that can be used by a blocking protobuf service. |
protected RpcClient.Connection |
RpcClient.getConnection(User ticket,
RpcClient.Call call,
InetSocketAddress addr,
int rpcTimeout,
Codec codec,
org.apache.hadoop.io.compress.CompressionCodec compressor)
|
Constructors in org.apache.hadoop.hbase.ipc with parameters of type User | |
---|---|
RpcClient.BlockingRpcChannelImplementation(RpcClient rpcClient,
ServerName sn,
User ticket,
int rpcTimeout)
|
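On the server side, RpcServer.getRequestUser() is how code running inside an RPC handler (for example a coprocessor) discovers which User issued the current call. A minimal sketch; the fallback to User.getCurrent() when no RPC context is active is a common pattern, and the helper class name is illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.security.User;

public final class RequestUserLookup {

  /**
   * Returns the User that issued the RPC currently being handled,
   * or the process user when called outside of any RPC context.
   */
  public static User activeUser() throws IOException {
    User user = RpcServer.getRequestUser();   // null when not inside an RPC call
    if (user == null) {
      user = User.getCurrent();               // fall back to the local execution context
    }
    return user;
  }

  private RequestUserLookup() {
  }
}
```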
Uses of User in org.apache.hadoop.hbase.regionserver |
---|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type User | |
---|---|
List<StoreFile> |
HStore.compact(CompactionContext compaction,
CompactionThroughputController throughputController,
User user)
|
List<StoreFile> |
Store.compact(CompactionContext compaction,
CompactionThroughputController throughputController,
User user)
|
boolean |
HRegion.compact(CompactionContext compaction,
Store store,
CompactionThroughputController throughputController,
User user)
|
PairOfSameType<HRegion> |
SplitTransaction.execute(Server server,
RegionServerServices services,
User user)
Run the transaction. |
HRegion |
RegionMergeTransaction.execute(Server server,
RegionServerServices services,
User user)
|
CompactionRequest |
CompactSplitThread.requestCompaction(HRegion r,
Store s,
String why,
int priority,
CompactionRequest request,
User user)
|
CompactionRequest |
CompactionRequestor.requestCompaction(HRegion r,
Store s,
String why,
int pri,
CompactionRequest request,
User user)
|
List<CompactionRequest> |
CompactSplitThread.requestCompaction(HRegion r,
String why,
int p,
List<Pair<CompactionRequest,Store>> requests,
User user)
|
List<CompactionRequest> |
CompactionRequestor.requestCompaction(HRegion r,
String why,
int pri,
List<Pair<CompactionRequest,Store>> requests,
User user)
|
CompactionContext |
HStore.requestCompaction(int priority,
CompactionRequest baseRequest,
User user)
|
CompactionContext |
Store.requestCompaction(int priority,
CompactionRequest baseRequest,
User user)
|
void |
CompactSplitThread.requestRegionsMerge(HRegion a,
HRegion b,
boolean forcible,
long masterSystemTime,
User user)
|
void |
CompactSplitThread.requestSplit(HRegion r,
byte[] midKey,
User user)
|
boolean |
SplitTransaction.rollback(Server server,
RegionServerServices services,
User user)
|
boolean |
RegionMergeTransaction.rollback(Server server,
RegionServerServices services,
User user)
|
void |
RegionMergeTransaction.stepsAfterPONR(Server server,
RegionServerServices services,
HRegion mergedRegion,
User user)
|
PairOfSameType<HRegion> |
SplitTransaction.stepsAfterPONR(Server server,
RegionServerServices services,
PairOfSameType<HRegion> regions,
User user)
|
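These are internal region server APIs: the User argument is threaded through compaction, split, and merge requests so that coprocessor hooks run with the requesting user's credentials. Below is a minimal sketch of selecting and running a store compaction on behalf of a user, assuming server-side code already holds the Store, throughput controller, and User; the helper class, the use of Store.PRIORITY_USER, and the null base request are assumptions for illustration.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;
import org.apache.hadoop.hbase.security.User;

public final class CompactAsUser {

  /**
   * Selects files for compaction on the given store and runs the compaction,
   * attributing the work to the supplied user. Returns the resulting store files,
   * or null if nothing was selected.
   */
  public static List<StoreFile> compactStore(Store store,
                                             CompactionThroughputController controller,
                                             User user) throws IOException {
    // Ask the store to select files; a user-priority request with no base request.
    CompactionContext context = store.requestCompaction(Store.PRIORITY_USER, null, user);
    if (context == null) {
      return null; // nothing to compact
    }
    // Run the compaction; coprocessor hooks see the supplied user.
    return store.compact(context, controller, user);
  }

  private CompactAsUser() {
  }
}
```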
Uses of User in org.apache.hadoop.hbase.regionserver.compactions |
---|
Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type User | |
---|---|
List<org.apache.hadoop.fs.Path> |
DefaultCompactor.compact(CompactionRequest request,
CompactionThroughputController throughputController,
User user)
Do a minor/major compaction on an explicit set of storefiles from a Store. |
List<org.apache.hadoop.fs.Path> |
StripeCompactor.compact(CompactionRequest request,
int targetCount,
long targetSize,
byte[] left,
byte[] right,
byte[] majorRangeFromRow,
byte[] majorRangeToRow,
CompactionThroughputController throughputController,
User user)
|
List<org.apache.hadoop.fs.Path> |
StripeCompactor.compact(CompactionRequest request,
List<byte[]> targetBoundaries,
byte[] majorRangeFromRow,
byte[] majorRangeToRow,
CompactionThroughputController throughputController,
User user)
|
abstract List<org.apache.hadoop.fs.Path> |
CompactionContext.compact(CompactionThroughputController throughputController,
User user)
|
abstract List<org.apache.hadoop.fs.Path> |
StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor,
CompactionThroughputController throughputController,
User user)
Executes the request against compactor (essentially, just calls correct overload of compact method), to simulate more dynamic dispatch. |
protected InternalScanner |
Compactor.postCreateCoprocScanner(CompactionRequest request,
ScanType scanType,
InternalScanner scanner,
User user)
Calls coprocessor, if any, to create scanners - after normal scanner creation. |
protected InternalScanner |
Compactor.preCreateCoprocScanner(CompactionRequest request,
ScanType scanType,
long earliestPutTs,
List<StoreFileScanner> scanners,
User user)
|
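CompactionContext.compact(throughputController, user) is the abstract entry point that each store engine's context implements; the default engine forwards it to DefaultCompactor.compact(request, throughputController, user). A minimal sketch of invoking an already-selected context, assuming server-side code already holds the context and controller; the helper class name is illustrative.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;
import org.apache.hadoop.hbase.security.User;

public final class RunSelectedCompaction {

  /** Runs an already-selected compaction and returns the paths of the new store files. */
  public static List<Path> run(CompactionContext context,
                               CompactionThroughputController controller,
                               User user) throws IOException {
    // Dispatches to the engine-specific compactor (DefaultCompactor, StripeCompactor, ...).
    return context.compact(controller, user);
  }

  private RunSelectedCompaction() {
  }
}
```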
Uses of User in org.apache.hadoop.hbase.security |
---|
Subclasses of User in org.apache.hadoop.hbase.security | |
---|---|
static class |
User.SecureHadoopUser
Bridges User invocations to underlying calls to
UserGroupInformation for secure Hadoop
0.20 and versions 0.21 and above. |
Methods in org.apache.hadoop.hbase.security that return User | |
---|---|
static User |
User.create(org.apache.hadoop.security.UserGroupInformation ugi)
Wraps an underlying UserGroupInformation instance. |
User |
UserProvider.create(org.apache.hadoop.security.UserGroupInformation ugi)
Wraps an underlying UserGroupInformation instance. |
static User |
User.createUserForTesting(org.apache.hadoop.conf.Configuration conf,
String name,
String[] groups)
Generates a new User instance specifically for use in test code. |
static User |
User.SecureHadoopUser.createUserForTesting(org.apache.hadoop.conf.Configuration conf,
String name,
String[] groups)
|
static User |
User.getCurrent()
Returns the User instance within current execution context. |
User |
UserProvider.getCurrent()
|
Methods in org.apache.hadoop.hbase.security with parameters of type User | |
---|---|
static boolean |
Superusers.isSuperUser(User user)
|
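A minimal sketch of the typical User lookups listed above: obtaining the current user, wrapping a UGI, creating a throwaway test identity, and checking superuser status. The call to Superusers.initialize is an assumption made here only so the check has data to consult (it is normally done by the master or region server at startup), and all names are illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.Superusers;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.UserProvider;
import org.apache.hadoop.security.UserGroupInformation;

public class UserLookups {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();

    // The user running this JVM (Kerberos principal on a secure cluster, OS user otherwise).
    User current = User.getCurrent();

    // The pluggable way to do the same lookup.
    UserProvider provider = UserProvider.instantiate(conf);
    User viaProvider = provider.getCurrent();

    // Wrap an arbitrary UserGroupInformation instance.
    User wrapped = User.create(UserGroupInformation.createRemoteUser("bob")); // illustrative

    // A throwaway identity for tests, with explicit group membership.
    User testUser = User.createUserForTesting(conf, "test-user", new String[] {"testgroup"});

    // Normally initialized by the hosting service; done here only for the check below.
    Superusers.initialize(conf);
    boolean isSuper = Superusers.isSuperUser(current);

    System.out.println(current.getShortName() + " superuser? " + isSuper);
  }
}
```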
Uses of User in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access that return User | |
---|---|
User |
AuthResult.getUser()
|
Methods in org.apache.hadoop.hbase.security.access with parameters of type User | |
---|---|
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
String namespace)
|
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier)
|
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families)
|
boolean |
TableAuthManager.authorize(User user,
Permission.Action action)
Authorize a global permission based on ACLs for the given user and the user's groups. |
boolean |
TableAuthManager.authorize(User user,
String namespace,
Permission.Action action)
|
boolean |
TableAuthManager.authorize(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action)
|
boolean |
TableAuthManager.authorize(User user,
TableName table,
byte[] family,
Permission.Action action)
|
boolean |
TableAuthManager.authorize(User user,
TableName table,
Cell cell,
Permission.Action action)
Authorize a user for a given KV. |
boolean |
TableAuthManager.authorizeUser(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action)
|
boolean |
TableAuthManager.authorizeUser(User user,
TableName table,
byte[] family,
Permission.Action action)
Checks authorization to a given table and column family for a user, based on the stored user permissions. |
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
String namespace)
|
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier)
|
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families)
|
static List<Permission> |
AccessControlLists.getCellPermissionsForUser(User user,
Cell cell)
|
boolean |
TableAuthManager.hasAccess(User user,
TableName table,
Permission.Action action)
|
boolean |
TableAuthManager.matchPermission(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action)
|
boolean |
TableAuthManager.matchPermission(User user,
TableName table,
byte[] family,
Permission.Action action)
Returns true if the given user has a TablePermission matching up
to the column family portion of a permission. |
boolean |
TableAuthManager.userHasAccess(User user,
TableName table,
Permission.Action action)
Checks if the user has access to the full table or at least a family/qualifier for the specified action. |
Constructors in org.apache.hadoop.hbase.security.access with parameters of type User | |
---|---|
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
String namespace)
|
|
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier)
|
|
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families)
|
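The AuthResult factory methods and TableAuthManager.authorize overloads above are what the access controller uses to make and record access decisions. A minimal sketch of an ACL check that yields an AuthResult suitable for audit logging, assuming a TableAuthManager instance is already available; the table, family, and request names are illustrative.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.access.AuthResult;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.security.access.TableAuthManager;
import org.apache.hadoop.hbase.util.Bytes;

public final class AclCheck {

  /** Checks READ access to a column family and returns an AuthResult for audit logging. */
  public static AuthResult checkRead(TableAuthManager authManager, User user) {
    TableName table = TableName.valueOf("my_table");   // illustrative
    byte[] family = Bytes.toBytes("cf");                // illustrative
    byte[] qualifier = Bytes.toBytes("col");            // illustrative

    boolean allowed =
        authManager.authorize(user, table, family, qualifier, Permission.Action.READ);

    return allowed
        ? AuthResult.allow("get", "user has READ on family", user,
            Permission.Action.READ, table, family, qualifier)
        : AuthResult.deny("get", "no READ permission on family", user,
            Permission.Action.READ, table, family, qualifier);
  }

  private AclCheck() {
  }
}
```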
Uses of User in org.apache.hadoop.hbase.security.token |
---|
Methods in org.apache.hadoop.hbase.security.token with parameters of type User | |
---|---|
static void |
TokenUtil.addTokenForJob(HConnection conn,
org.apache.hadoop.mapred.JobConf job,
User user)
Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given map reduce job. |
static void |
TokenUtil.addTokenForJob(HConnection conn,
User user,
org.apache.hadoop.mapreduce.Job job)
Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given map reduce job. |
static boolean |
TokenUtil.addTokenIfMissing(HConnection conn,
User user)
Checks if an authentication token exists for the connected cluster, obtaining one if needed and adding it to the user's credentials. |
static void |
TokenUtil.obtainAndCacheToken(HConnection conn,
User user)
Obtain an authentication token for the given user and add it to the user's credentials. |
static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier> |
TokenUtil.obtainToken(HConnection conn,
User user)
Obtain and return an authentication token for the current user. |
static void |
TokenUtil.obtainTokenForJob(HConnection conn,
org.apache.hadoop.mapred.JobConf job,
User user)
Obtain an authentication token on behalf of the given user and add it to the credentials for the given map reduce job. |
static void |
TokenUtil.obtainTokenForJob(HConnection conn,
User user,
org.apache.hadoop.mapreduce.Job job)
Obtain an authentication token on behalf of the given user and add it to the credentials for the given map reduce job. |
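The TokenUtil helpers above are what MapReduce setup code uses to ship an HBase delegation token with a job so that tasks can authenticate without a Kerberos ticket. A minimal sketch for the new-style mapreduce API, assuming a secured cluster; the job name is illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.mapreduce.Job;

public class ShipHBaseToken {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-scan-job"); // illustrative job name

    User user = User.getCurrent();
    HConnection conn = HConnectionManager.createConnection(conf);
    try {
      // Obtains an authentication token for 'user' (if the cluster is secured)
      // and adds it to the job's credentials.
      TokenUtil.addTokenForJob(conn, user, job);
    } finally {
      conn.close();
    }

    // ... configure mappers/reducers and submit the job ...
  }
}
```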
Uses of User in org.apache.hadoop.hbase.security.visibility |
---|
Methods in org.apache.hadoop.hbase.security.visibility that return User | |
---|---|
static User |
VisibilityUtils.getActiveUser()
|
Methods in org.apache.hadoop.hbase.security.visibility with parameters of type User | |
---|---|
List<String> |
EnforcingScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
|
List<String> |
ScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
Helps to get a list of labels associated with a UGI. |
List<String> |
SimpleScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
|
List<String> |
DefinedSetFilterScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
|
List<String> |
FeedUserAuthScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
|
boolean |
DefaultVisibilityLabelServiceImpl.havingSystemAuth(User user)
|
boolean |
VisibilityLabelService.havingSystemAuth(User user)
System checks for user auth during admin operations. |
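ScanLabelGenerator implementations receive the requesting User plus the Authorizations supplied on the Scan or Get, and decide which visibility labels to apply. A minimal sketch of a custom generator that simply passes the requested labels through; the class name is illustrative, and real generators such as EnforcingScanLabelGenerator intersect the request with the labels granted to the user.

```java
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.ScanLabelGenerator;

/** Passes the labels requested in the operation's Authorizations straight through. */
public class PassThroughScanLabelGenerator implements ScanLabelGenerator {

  private Configuration conf;

  @Override
  public void setConf(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public Configuration getConf() {
    return this.conf;
  }

  @Override
  public List<String> getLabels(User user, Authorizations authorizations) {
    if (authorizations == null) {
      return Collections.emptyList();
    }
    // A real generator would intersect these with the labels granted to 'user'.
    return authorizations.getLabels();
  }
}
```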