diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 05ed55219c5b2..3a1272deaa50e 100644
--- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -23,7 +23,9 @@ heapster monitoring will be turned-on by default).
 
 To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster
 and kubectl at version 1.6 or later.  Furthermore, in order to make use of custom metrics, your cluster
-must be able to communicate with the API server providing the custom metrics API.
+must be able to communicate with the API server providing the custom metrics API. Finally, to use metrics
+not related to any Kubernetes object, you must have a Kubernetes cluster at version 1.10 or later, and
+you must be able to communicate with the API server that provides the external metrics API.
 See the [Horizontal Pod Autoscaler user guide](/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics) for more details.
 
 ## Step One: Run & expose php-apache server
@@ -267,6 +269,37 @@ Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was con
 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route
 Ingress were serving a total of 10000 requests per second.
 
+### Autoscaling on metrics not related to Kubernetes objects
+
+Applications running on Kubernetes may need to autoscale based on metrics that don't have an obvious
+relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with
+no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case
+with *external metrics*.
+
+Using external metrics requires some knowledge of your monitoring system, and a cluster monitoring
+setup similar to the one required for custom metrics. With external metrics, you can autoscale
+based on any metric available in your monitoring system by providing a `metricName` field in your
+HorizontalPodAutoscaler manifest. Additionally, you can use a `metricSelector` field to limit which
+metrics' time series you use for autoscaling. If `metricSelector` matches multiple time series,
+the HorizontalPodAutoscaler uses the sum of their values.
+
+For example, if your application processes tasks from a hosted queue service, you could add the following
+section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks:
+
+```yaml
+- type: External
+  external:
+    metricName: queue_messages_ready
+    metricSelector:
+      matchLabels:
+        queue: worker_tasks
+    targetAverageValue: 30
+```
+
+If your metric describes work or resources that can be divided among the autoscaled pods, the `targetAverageValue`
+field describes how much of that work each pod can handle. Instead of `targetAverageValue`, you can use the
+`targetValue` field to define the desired value of your external metric.
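+
+As a minimal illustrative sketch, the same example could instead target the overall metric value with
+`targetValue` (the target of 100 outstanding tasks below is only an assumed example figure):
+
+```yaml
+- type: External
+  external:
+    metricName: queue_messages_ready
+    metricSelector:
+      matchLabels:
+        queue: worker_tasks
+    # targetValue compares the summed metric value itself against 100,
+    # rather than dividing it across the current number of pods.
+    targetValue: 100
+```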
+
 ## Appendix: Horizontal Pod Autoscaler Status Conditions
 
 When using the `autoscaling/v2beta1` form of the HorizontalPodAutoscaler, you will be able to see
diff --git a/docs/tasks/run-application/horizontal-pod-autoscale.md b/docs/tasks/run-application/horizontal-pod-autoscale.md
index a9cc25f6472d9..728e5d553e5b2 100644
--- a/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -157,13 +157,15 @@ To use custom metrics with your Horizontal Pod Autoscaler, you must set the nece
 
 * [Enable the API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) if you have not already done so.
 
-* Register your resource metrics API and your
-custom metrics API with the API aggregation layer. Both of these API servers must be running *on* your cluster.
+* Register your resource metrics API, your
+custom metrics API, and, optionally, your external metrics API with the API aggregation layer. All of these API servers must be running *on* your cluster.
 
   * *Resource Metrics API*: You can use Heapster's implementation of the resource metrics API, by running Heapster with its `--api-server` flag set to true.
 
   * *Custom Metrics API*: This must be provided by a separate component. To get started with boilerplate code, see the [kubernetes-incubator/custom-metrics-apiserver](https://github.com/kubernetes-incubator/custom-metrics-apiserver) and the [k8s.io/metrics](https://github.com/kubernetes/metrics) repositories.
 
+  * *External Metrics API*: Starting from Kubernetes 1.10, you can use this API if you need to autoscale on metrics not related to any Kubernetes object. Similarly to the *Custom Metrics API*, this must be provided by a separate component.
+
 * Set the appropriate flags for kube-controller-manager:
 
   * `--horizontal-pod-autoscaler-use-rest-clients` should be true.