public class FederationInterceptorREST extends AbstractRESTRequestInterceptor
Extends the AbstractRESTRequestInterceptor class and provides an implementation for federation of YARN RM and for scaling an application across multiple YARN SubClusters. All the federation-specific implementation is encapsulated in this class. This is always the last interceptor in the chain.

Constructor and Description |
---|
FederationInterceptorREST() |
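The interceptor chain is configured on the Router rather than instantiated by clients. Below is a minimal sketch, assuming the yarn.router.webapp.interceptor-class.pipeline property described in the YARN Federation documentation (verify the exact key against your Hadoop version); the value is a comma-separated list of interceptor classes and FederationInterceptorREST is expected to be the last entry.

```java
import org.apache.hadoop.conf.Configuration;

public class RouterPipelineConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The Router builds its REST interceptor chain from this comma-separated list.
    // Property key is an assumption based on the YARN Federation documentation.
    conf.set("yarn.router.webapp.interceptor-class.pipeline",
        "org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST");
    System.out.println(conf.get("yarn.router.webapp.interceptor-class.pipeline"));
  }
}
```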
Modifier and Type | Method and Description |
---|---|
javax.ws.rs.core.Response | addToClusterNodeLabels(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo newNodeLabels, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | cancelDelegationToken(javax.servlet.http.HttpServletRequest hsr) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo | checkUserAccessToQueue(String queue, String username, String queueAclType, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | createNewApplication(javax.servlet.http.HttpServletRequest hsr) The YARN Router forwards every getNewApplication request to any RM. |
javax.ws.rs.core.Response | createNewReservation(javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | deleteReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) |
String | dumpSchedulerLogs(String time, javax.servlet.http.HttpServletRequest hsr) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo | get() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo | getActivities(javax.servlet.http.HttpServletRequest hsr, String nodeId, String groupBy) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo | getApp(javax.servlet.http.HttpServletRequest hsr, String appId, Set<String> unselectedFields) The YARN Router forwards the request to the respective YARN RM in which the AM is running. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo | getAppActivities(javax.servlet.http.HttpServletRequest hsr, String appId, String time, Set<String> requestPriorities, Set<String> allocationRequestIds, String groupBy, String limit, Set<String> actions, boolean summarize) |
org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo | getAppAttempt(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptsInfo | getAppAttempts(javax.servlet.http.HttpServletRequest hsr, String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority | getAppPriority(javax.servlet.http.HttpServletRequest hsr, String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue | getAppQueue(javax.servlet.http.HttpServletRequest hsr, String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo | getApps(javax.servlet.http.HttpServletRequest hsr, String stateQuery, Set<String> statesQuery, String finalStatusQuery, String userQuery, String queueQuery, String count, String startedBegin, String startedEnd, String finishBegin, String finishEnd, Set<String> applicationTypes, Set<String> applicationTags, String name, Set<String> unselectedFields) The YARN Router forwards the request to all the YARN RMs in parallel and then groups the ApplicationReports by ApplicationId. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState | getAppState(javax.servlet.http.HttpServletRequest hsr, String appId) The YARN Router forwards the request to the respective YARN RM in which the AM is running. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo | getAppStatistics(javax.servlet.http.HttpServletRequest hsr, Set<String> stateQueries, Set<String> typeQueries) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo | getAppTimeout(javax.servlet.http.HttpServletRequest hsr, String appId, String type) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutsInfo | getAppTimeouts(javax.servlet.http.HttpServletRequest hsr, String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo | getClusterInfo() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo | getClusterMetricsInfo() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo | getClusterNodeLabels(javax.servlet.http.HttpServletRequest hsr) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterUserInfo | getClusterUserInfo(javax.servlet.http.HttpServletRequest hsr) |
org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo | getContainer(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId, String containerId) |
org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo | getContainers(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId) |
protected DefaultRequestInterceptorREST | getInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo | getLabelsOnNode(javax.servlet.http.HttpServletRequest hsr, String nodeId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.LabelsToNodesInfo | getLabelsToNodes(Set<String> labels) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeInfo | getNode(String nodeId) The YARN Router forwards the request to all the SubClusters to find where the node is running. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo | getNodes(String states) The YARN Router forwards the request to all the YARN RMs in parallel and then removes duplicated NodeInfo entries by NodeId. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsInfo | getNodeToLabels(javax.servlet.http.HttpServletRequest hsr) |
protected DefaultRequestInterceptorREST | getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId, String webAppAddress) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo | getSchedulerInfo() |
void | init(String user) Initializes the RESTRequestInterceptor. |
javax.ws.rs.core.Response | listReservation(String queue, String reservationId, long startTime, long endTime, boolean includeResourceAllocations, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | postDelegationToken(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.DelegationToken tokenData, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | postDelegationTokenExpiration(javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | removeFromCluserNodeLabels(Set<String> oldNodeLabels, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | replaceLabelsOnNode(Set<String> newNodeLabelsName, javax.servlet.http.HttpServletRequest hsr, String nodeId) |
javax.ws.rs.core.Response | replaceLabelsOnNodes(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsEntryList newNodeToLabels, javax.servlet.http.HttpServletRequest hsr) |
void | setNextInterceptor(RESTRequestInterceptor next) Sets the next RESTRequestInterceptor in the chain. |
void | shutdown() Disposes the RESTRequestInterceptor. |
javax.ws.rs.core.Response | signalToContainer(String containerId, String command, javax.servlet.http.HttpServletRequest req) |
javax.ws.rs.core.Response | submitApplication(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationSubmissionContextInfo newApp, javax.servlet.http.HttpServletRequest hsr) Today, YARN performs no checks on the applicationId being submitted. |
javax.ws.rs.core.Response | submitReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response | updateApplicationPriority(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority targetPriority, javax.servlet.http.HttpServletRequest hsr, String appId) |
javax.ws.rs.core.Response | updateApplicationTimeout(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo appTimeout, javax.servlet.http.HttpServletRequest hsr, String appId) |
javax.ws.rs.core.Response | updateAppQueue(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue targetQueue, javax.servlet.http.HttpServletRequest hsr, String appId) |
javax.ws.rs.core.Response | updateAppState(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState targetState, javax.servlet.http.HttpServletRequest hsr, String appId) The YARN Router forwards the request to the respective YARN RM in which the AM is running. |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo | updateNodeResource(javax.servlet.http.HttpServletRequest hsr, String nodeId, org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionInfo resourceOption) |
javax.ws.rs.core.Response | updateReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) |
Methods inherited from class AbstractRESTRequestInterceptor: getConf, getNextInterceptor, setConf
public void init(String user)
Initializes the RESTRequestInterceptor.
Specified by: init in interface RESTRequestInterceptor
Overrides: init in class AbstractRESTRequestInterceptor
Parameters: user - the name of the client

protected DefaultRequestInterceptorREST getInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId)
protected DefaultRequestInterceptorREST getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId, String webAppAddress)
public javax.ws.rs.core.Response createNewApplication(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit.
ResourceManager: the Router will time out and contact another RM.
StateStore: not in the execution path.
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
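For illustration, a hedged client-side sketch of calling this operation through the Router's REST endpoint. The /ws/v1/cluster/apps/new-application path follows the standard RM web-service layout that the Router mirrors; the host name and port below are placeholders to adapt to your deployment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NewApplicationExample {
  public static void main(String[] args) throws Exception {
    // Router address and port are assumptions; adjust to your deployment.
    String router = "http://router.example.com:8089";
    HttpRequest request = HttpRequest
        .newBuilder(URI.create(router + "/ws/v1/cluster/apps/new-application"))
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
    // The Router forwards the call to one RM; on RM timeout it contacts another subcluster.
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());
    System.out.println(response.body()); // JSON containing the new application id
  }
}
```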
public javax.ws.rs.core.Response submitApplication(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationSubmissionContextInfo newApp, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
Base scenario:
1. The Client submits an application to the Router.
2. The Router selects one SubCluster to forward the request to.
3. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId.
4. The State Store replies with the selected SubCluster (e.g. SC1).
5. The Router submits the request to the selected SubCluster.
In case of State Store failure:
1. The Client submits an application to the Router.
2. The Router selects one SubCluster to forward the request to.
3. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId.
4. Because the State Store is down, the Router times out and retries according to the FederationFacade settings.
5. The Router replies to the Client with an error message.
If the State Store fails after inserting the tuple: identical behavior as RMWebServices.
In case of Router failure:
Scenario 1 – Crash before submission to the ResourceManager:
1. The Client submits an application to the Router.
2. The Router selects one SubCluster to forward the request to.
3. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId.
4. The Router crashes.
5. The Client times out and resubmits the application.
6. The Router selects one SubCluster to forward the request to.
7. The Router tries to insert a tuple into the State Store with the newly selected SubCluster (e.g. SC2) and the appId.
8. Because a tuple for this appId is already in the State Store, the State Store returns the previously selected SubCluster (e.g. SC1).
9. The Router submits the request to the previously selected SubCluster (e.g. SC1).
Scenario 2 – Crash after submission to the ResourceManager:
1. The Client submits an application to the Router.
2. The Router selects one SubCluster to forward the request to.
3. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId.
4. The Router submits the request to the selected SubCluster.
5. The Router crashes.
6. The Client times out and resubmits the application.
7. The Router selects one SubCluster to forward the request to.
8. The Router tries to insert a tuple into the State Store with the newly selected SubCluster (e.g. SC2) and the appId.
9. The State Store replies with the previously selected SubCluster (e.g. SC1).
10. The Router submits the request to that SubCluster (e.g. SC1). When a client re-submits the same application to the same RM, the RM does not raise an exception and replies with an operation-successful message.
In case of Client failure: identical behavior as RMWebServices.
In case of ResourceManager failure:
1. The Client submits an application to the Router.
2. The Router selects one SubCluster to forward the request to.
3. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId.
4. The Router submits the request to the selected SubCluster.
5. The entire SubCluster is down: all the RMs in HA, or the master RM, are unreachable.
6. The Router times out.
7. The Router selects a new SubCluster to forward the request to.
8. The Router updates the tuple in the State Store with the newly selected SubCluster (e.g. SC2) and the appId.
9. The State Store replies with OK.
10. The Router submits the request to the selected SubCluster (e.g. SC2).
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
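A hedged submission sketch over the Router's REST endpoint follows. The address, the application id, and the minimal JSON body are illustrative assumptions; a real ApplicationSubmissionContextInfo body also needs an AM container spec and resource settings. The key point from the scenarios above is that resubmitting the same application id after a Router crash is safe.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitApplicationExample {
  public static void main(String[] args) throws Exception {
    // Address, application id, and JSON body are placeholders for illustration.
    String router = "http://router.example.com:8089";
    String body = "{ \"application-id\": \"application_1700000000000_0001\", "
        + "\"application-name\": \"federation-demo\" }";
    HttpRequest request = HttpRequest.newBuilder(URI.create(router + "/ws/v1/cluster/apps"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    // Resubmitting the same application-id after a Router crash is safe: the State Store
    // returns the subcluster chosen earlier and the RM treats the duplicate as a success.
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```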
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo getApp(javax.servlet.http.HttpServletRequest hsr, String appId, Set<String> unselectedFields)
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router will time out and the call will fail.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
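A hedged usage sketch for this operation is shown below; the Router address and the application id are placeholders, and the /ws/v1/cluster/apps/{appid} path follows the standard RM web-service layout that the Router mirrors.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetAppExample {
  public static void main(String[] args) throws Exception {
    // Router address and application id are placeholders.
    String router = "http://router.example.com:8089";
    String appId = "application_1700000000000_0001";
    HttpRequest request = HttpRequest
        .newBuilder(URI.create(router + "/ws/v1/cluster/apps/" + appId))
        .GET()
        .build();
    // The Router looks up the home subcluster of the application and forwards the
    // call to that RM only; if that RM is unreachable the call fails.
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}
```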
public javax.ws.rs.core.Response updateAppState(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState targetState, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router will time out and the call will fail.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
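A hedged sketch of driving this operation through the Router, assuming the standard RM application-state endpoint (PUT /ws/v1/cluster/apps/{appid}/state); the address and application id are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateAppStateExample {
  public static void main(String[] args) throws Exception {
    // Router address and application id are placeholders for illustration.
    String router = "http://router.example.com:8089";
    String appId = "application_1700000000000_0001";
    HttpRequest request = HttpRequest
        .newBuilder(URI.create(router + "/ws/v1/cluster/apps/" + appId + "/state"))
        .header("Content-Type", "application/json")
        .PUT(HttpRequest.BodyPublishers.ofString("{\"state\":\"KILLED\"}"))
        .build();
    // The Router resolves the home subcluster of the application from the State Store
    // and forwards the state change to that RM only.
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```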
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo getApps(javax.servlet.http.HttpServletRequest hsr, String stateQuery, Set<String> statesQuery, String finalStatusQuery, String userQuery, String queueQuery, String count, String startedBegin, String startedEnd, String finishBegin, String finishEnd, Set<String> applicationTypes, Set<String> applicationTags, String name, Set<String> unselectedFields)
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router calls each YARN RM in parallel, using one thread per YARN RM. If a YARN RM fails, that single call times out; the Router still merges the ApplicationReports it received and provides a partial list to the client.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
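The grouping of ApplicationReports by ApplicationId can be pictured with a small stand-alone sketch. The Report record and mergeByAppId method below are hypothetical simplifications rather than the actual Router code; they only illustrate the deduplication step that turns the parallel per-subcluster answers into a single list.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MergeAppReportsSketch {

  // Hypothetical minimal stand-in for an application report: only the fields needed here.
  record Report(String appId, String subCluster, String state) { }

  /** Keep one report per ApplicationId, preferring the first subcluster that answered. */
  static List<Report> mergeByAppId(List<List<Report>> perSubClusterResults) {
    Map<String, Report> byId = new LinkedHashMap<>();
    for (List<Report> reports : perSubClusterResults) {
      for (Report r : reports) {
        byId.putIfAbsent(r.appId(), r); // duplicates from other subclusters are dropped
      }
    }
    return new ArrayList<>(byId.values());
  }

  public static void main(String[] args) {
    List<List<Report>> results = List.of(
        List.of(new Report("application_1_0001", "SC1", "RUNNING")),
        List.of(new Report("application_1_0001", "SC2", "RUNNING"),   // duplicate id
                new Report("application_1_0002", "SC2", "FINISHED")));
    mergeByAppId(results).forEach(System.out::println);
  }
}
```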
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeInfo getNode(String nodeId)
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router will time out and the call will fail.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo getNodes(String states)
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router calls each YARN RM in parallel, using one thread per YARN RM. If a YARN RM fails, that single call times out; the Router still uses the NodesInfo it received and provides a partial list to the client.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
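A hedged client-side sketch for this fan-out call follows; the Router address is a placeholder, and the states query parameter follows the standard RM nodes API that the Router mirrors.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListNodesExample {
  public static void main(String[] args) throws Exception {
    // Router address is a placeholder; adjust to your deployment.
    String router = "http://router.example.com:8089";
    HttpRequest request = HttpRequest
        .newBuilder(URI.create(router + "/ws/v1/cluster/nodes?states=RUNNING"))
        .GET()
        .build();
    // Each subcluster RM is queried in parallel; the Router returns the union of the
    // answers, deduplicated by NodeId, so a failed RM only makes the list partial.
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}
```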
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo updateNodeResource(javax.servlet.http.HttpServletRequest hsr, String nodeId, org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionInfo resourceOption)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo getClusterMetricsInfo()
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState getAppState(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
Possible failures and behaviors:
Client: identical behavior as RMWebServices.
Router: the Client will time out and resubmit the request.
ResourceManager: the Router will time out and the call will fail.
State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
org.apache.hadoop.security.authorize.AuthorizationException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo get()
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo getClusterInfo()
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterUserInfo getClusterUserInfo(javax.servlet.http.HttpServletRequest hsr)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo getSchedulerInfo()
public String dumpSchedulerLogs(String time, javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo getActivities(javax.servlet.http.HttpServletRequest hsr, String nodeId, String groupBy)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo getAppActivities(javax.servlet.http.HttpServletRequest hsr, String appId, String time, Set<String> requestPriorities, Set<String> allocationRequestIds, String groupBy, String limit, Set<String> actions, boolean summarize)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo getAppStatistics(javax.servlet.http.HttpServletRequest hsr, Set<String> stateQueries, Set<String> typeQueries)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsInfo getNodeToLabels(javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.LabelsToNodesInfo getLabelsToNodes(Set<String> labels) throws IOException
IOException
public javax.ws.rs.core.Response replaceLabelsOnNodes(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsEntryList newNodeToLabels, javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public javax.ws.rs.core.Response replaceLabelsOnNode(Set<String> newNodeLabelsName, javax.servlet.http.HttpServletRequest hsr, String nodeId) throws Exception
Exception
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo getClusterNodeLabels(javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public javax.ws.rs.core.Response addToClusterNodeLabels(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo newNodeLabels, javax.servlet.http.HttpServletRequest hsr) throws Exception
Exception
public javax.ws.rs.core.Response removeFromCluserNodeLabels(Set<String> oldNodeLabels, javax.servlet.http.HttpServletRequest hsr) throws Exception
Exception
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo getLabelsOnNode(javax.servlet.http.HttpServletRequest hsr, String nodeId) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority getAppPriority(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateApplicationPriority(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority targetPriority, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue getAppQueue(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateAppQueue(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue targetQueue, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public javax.ws.rs.core.Response postDelegationToken(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.DelegationToken tokenData, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
Exception
public javax.ws.rs.core.Response postDelegationTokenExpiration(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
Exception
public javax.ws.rs.core.Response cancelDelegationToken(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
Exception
public javax.ws.rs.core.Response createNewReservation(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response submitReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response updateReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response deleteReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response listReservation(String queue, String reservationId, long startTime, long endTime, boolean includeResourceAllocations, javax.servlet.http.HttpServletRequest hsr) throws Exception
Exception
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo getAppTimeout(javax.servlet.http.HttpServletRequest hsr, String appId, String type) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutsInfo getAppTimeouts(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateApplicationTimeout(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo appTimeout, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptsInfo getAppAttempts(javax.servlet.http.HttpServletRequest hsr, String appId)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo checkUserAccessToQueue(String queue, String username, String queueAclType, javax.servlet.http.HttpServletRequest hsr)
public org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo getAppAttempt(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId)
Parameters:
req - the servlet request
res - the servlet response
appId - the application for which we want to get the AppAttempt; it is a PathParam
appAttemptId - the AppAttempt for which we want to get the info; it is a PathParam
See Also: WebServices.getAppAttempt(HttpServletRequest, HttpServletResponse, String, String)
public org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo getContainers(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId)
Parameters:
req - the servlet request
res - the servlet response
appId - the application for which we want to get the containers info; it is a PathParam
appAttemptId - the AppAttempt for which we want to get the info; it is a PathParam
See Also: WebServices.getContainers(HttpServletRequest, HttpServletResponse, String, String)
public org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo getContainer(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId, String containerId)
Parameters:
req - the servlet request
res - the servlet response
appId - the application for which we want to get the containers info; it is a PathParam
appAttemptId - the AppAttempt for which we want to get the info; it is a PathParam
containerId - the container for which we want to get the info; it is a PathParam
See Also: WebServices.getContainer(HttpServletRequest, HttpServletResponse, String, String, String)
public void setNextInterceptor(RESTRequestInterceptor next)
Sets the next RESTRequestInterceptor in the chain.
Specified by: setNextInterceptor in interface RESTRequestInterceptor
Overrides: setNextInterceptor in class AbstractRESTRequestInterceptor
Parameters: next - the RESTRequestInterceptor to set in the pipeline

public javax.ws.rs.core.Response signalToContainer(String containerId, String command, javax.servlet.http.HttpServletRequest req)
public void shutdown()
Disposes the RESTRequestInterceptor.
Specified by: shutdown in interface RESTRequestInterceptor
Overrides: shutdown in class AbstractRESTRequestInterceptor
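A self-contained sketch of the interceptor lifecycle described by setNextInterceptor, init, and shutdown. The Interceptor interface and LoggingInterceptor class below are hypothetical stand-ins rather than the real RESTRequestInterceptor types; they only mirror the chain wiring in which FederationInterceptorREST is always the last element.

```java
public class InterceptorChainSketch {

  /** Hypothetical stand-in that mirrors the RESTRequestInterceptor lifecycle above. */
  interface Interceptor {
    void setNextInterceptor(Interceptor next);
    void init(String user);
    void shutdown();
  }

  static class LoggingInterceptor implements Interceptor {
    private Interceptor next;
    public void setNextInterceptor(Interceptor next) { this.next = next; }
    public void init(String user) {
      System.out.println("init for user " + user);
      if (next != null) {
        next.init(user);     // each element initializes the rest of the chain
      }
    }
    public void shutdown() {
      System.out.println("shutdown");
      if (next != null) {
        next.shutdown();     // dispose downstream interceptors as well
      }
    }
  }

  public static void main(String[] args) {
    Interceptor first = new LoggingInterceptor();
    Interceptor last = new LoggingInterceptor(); // stands in for FederationInterceptorREST,
    first.setNextInterceptor(last);              // which is always the last element
    first.init("test-user");
    first.shutdown();
  }
}
```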
Copyright © 2008–2023 Apache Software Foundation. All rights reserved.