doc: Fix import path of the PySparkProcessor #4981

Status: Open. Wants to merge 2 commits into base branch master.
9 changes: 6 additions & 3 deletions doc/amazon_sagemaker_processing.rst
@@ -83,7 +83,7 @@ First you need to create a :class:`PySparkProcessor` object

 .. code:: python

-    from sagemaker.processing import PySparkProcessor, ProcessingInput
+    from sagemaker.spark.processing import PySparkProcessor

     spark_processor = PySparkProcessor(
         base_job_name="sm-spark",
@@ -157,11 +157,14 @@ To successfully run the history server, first you need to make sure ``docker`` i

 SparkJarProcessor
 ---------------------

-Supposed that you have the jar file "preprocessing.jar" stored in the same directory as you are now, and the java package is ``com.path.to.your.class.PreProcessing.java``
-Here's an example of using PySparkProcessor.
+Suppose that you have the jar file "preprocessing.jar" stored in the same directory as you are now, and the java package is ``com.path.to.your.class.PreProcessing.java``.
+
+Here's an example of using SparkJarProcessor.

 .. code:: python

+    from sagemaker.spark.processing import SparkJarProcessor
+
     spark = SparkJarProcessor(
         base_job_name="sm-spark-java",
         image_uri=beta_image_uri,
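
For completeness, here is a hedged sketch of how the ``SparkJarProcessor`` snippet might continue, using the jar and class named in the new paragraph; ``beta_image_uri``, the role ARN, and the instance settings are placeholders assumed to be defined elsewhere on the page.

.. code:: python

    # Hedged sketch only: beta_image_uri, role, and instance settings are
    # illustrative placeholders, not values from the documentation.
    from sagemaker.spark.processing import SparkJarProcessor

    spark = SparkJarProcessor(
        base_job_name="sm-spark-java",
        image_uri=beta_image_uri,  # assumed to be set earlier in the doc
        role="arn:aws:iam::111122223333:role/SageMakerProcessingRole",  # placeholder
        instance_count=2,
        instance_type="ml.c5.xlarge",
    )

    # submit_app is the local jar; submit_class is the main class from the
    # package path mentioned above (assumed run() parameters of the SDK).
    spark.run(
        submit_app="preprocessing.jar",
        submit_class="com.path.to.your.class.PreProcessing",
    )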