Saturday, April 4, 2026

Getting Started with LocalStack and AWS S3Tables

Start LocalStack in Docker with the s3tables service enabled:

```bash
# LocalStack auth token; set it here if your LocalStack plan requires one
export LOCALSTACK_AUTH_TOKEN=""

mkdir -p .localstack-volume
docker rm -f localstack-main 2>/dev/null || true

docker run -d \
--name localstack-main \
-p 4566:4566 \
-p 4510-4559:4510-4559 \
-e LOCALSTACK_AUTH_TOKEN \
-e SERVICES=s3tables \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$(pwd)/.localstack-volume:/var/lib/localstack" \
localstack/localstack
```
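Before moving on, it is worth confirming the container is healthy. A minimal check (a sketch): LocalStack serves a health endpoint on the edge port that lists each service and its state.

```python
# Minimal health check (sketch): LocalStack's edge port serves
# /_localstack/health with a JSON map of services and their states.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:4566/_localstack/health") as resp:
    health = json.load(resp)

# Expect "available" or "running" for s3tables once the container is up.
print(health["services"].get("s3tables", "not listed"))
```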
Spark runs on the JVM, so first confirm a supported Java version is available:

In [32]:
! java --version
openjdk 11.0.25 2024-10-15
OpenJDK Runtime Environment Homebrew (build 11.0.25+0)
OpenJDK 64-Bit Server VM Homebrew (build 11.0.25+0, mixed mode)
 
#### Export ENV Variables
In [33]:
import os

os.environ["AWS_ACCESS_KEY_ID"] = "test"
os.environ["AWS_SECRET_ACCESS_KEY"] = "test"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
os.environ["AWS_ENDPOINT_URL"] = "http://localhost.localstack.cloud:4566"
os.environ["AWS_PROFILE"] = "localstack"
 

Create an S3 Table Bucket

In [34]:
! aws s3tables list-table-buckets
{
    "tableBuckets": []
}
In [35]:
! aws s3tables create-table-bucket --name my-table-bucket
{
    "arn": "arn:aws:s3tables:us-east-1:000000000000:bucket/my-table-bucket"
}
 

🧪 Step 1 — Add Iceberg + AWS Packages

In [36]:
def iceberg_runtime_package():
    """Pick the Iceberg Spark runtime artifact matching the installed PySpark."""
    import pyspark
    major, minor = map(int, pyspark.__version__.split(".")[:2])
    scala = "2.12" if major < 4 else "2.13"  # Spark 3.x ships Scala 2.12, Spark 4.x ships 2.13
    iceberg_ver = "1.10.0"
    return f"org.apache.iceberg:iceberg-spark-runtime-{major}.{minor}_{scala}:{iceberg_ver}"

ICEBERG_RUNTIME = iceberg_runtime_package()

packages = ",".join([
    "com.amazonaws:aws-java-sdk-bundle:1.12.661",
    "org.apache.hadoop:hadoop-aws:3.3.4",
    "software.amazon.awssdk:bundle:2.29.38",
    "com.github.ben-manes.caffeine:caffeine:3.1.8",
    "org.apache.commons:commons-configuration2:2.11.0",
    "software.amazon.s3tables:s3-tables-catalog-for-iceberg:0.1.8",
    ICEBERG_RUNTIME,
])

os.environ["PYSPARK_SUBMIT_ARGS"] = f"--packages {packages} pyspark-shell"
print("Using Iceberg package:", ICEBERG_RUNTIME)
Using Iceberg package: org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.10.0
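To make the version logic concrete, here is what the helper would resolve for a few PySpark versions (illustrative only; it assumes a matching iceberg-spark-runtime artifact is actually published for each pair):

```python
# Illustrative walk-through of iceberg_runtime_package()'s mapping.
for ver in ["3.4.3", "3.5.1", "4.0.0"]:
    major, minor = map(int, ver.split(".")[:2])
    scala = "2.12" if major < 4 else "2.13"
    print(f"{ver} -> org.apache.iceberg:iceberg-spark-runtime-{major}.{minor}_{scala}:1.10.0")
```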
 

🧪 Step 2 — Create Spark Session


In [37]:
from pyspark.sql import SparkSession
import os

# Config: the table bucket ARN returned by create-table-bucket above
TABLE_BUCKET_ARN = "arn:aws:s3tables:us-east-1:000000000000:bucket/my-table-bucket"
CATALOG_NAME = "ManagedIcebergCatalog"

spark = (
    SparkSession.builder.appName("LocalStackS3TablesIceberg")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config(f"spark.sql.catalog.{CATALOG_NAME}", "org.apache.iceberg.spark.SparkCatalog")
    .config(f"spark.sql.catalog.{CATALOG_NAME}.catalog-impl", "software.amazon.s3tables.iceberg.S3TablesCatalog")
    .config(f"spark.sql.catalog.{CATALOG_NAME}.warehouse", TABLE_BUCKET_ARN)
    .config(f"spark.sql.catalog.{CATALOG_NAME}.s3.endpoint", os.getenv('AWS_ENDPOINT_URL'))  # <-- fixed
    .config(f"spark.sql.catalog.{CATALOG_NAME}.s3.path-style-access", "true")
    .config(f"spark.sql.catalog.{CATALOG_NAME}.s3.access-key-id", os.getenv("AWS_ACCESS_KEY_ID"))
    .config(f"spark.sql.catalog.{CATALOG_NAME}.s3.secret-access-key", os.getenv("AWS_SECRET_ACCESS_KEY"))
    .getOrCreate()
)

print("✅ Spark Session Created")
✅ Spark Session Created
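Before creating anything, the catalog wiring can be smoke-tested; a sketch (on a fresh table bucket this should come back empty):

```python
# Smoke test (sketch): query the S3 Tables catalog through Spark SQL.
spark.sql(f"SHOW NAMESPACES IN {CATALOG_NAME}").show()
```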
 

Create a Namespace and Table, Then Query It

In [38]:
cat = CATALOG_NAME
ns = "my_namespace"
table = "demo_table1"

spark.sql(f"CREATE NAMESPACE IF NOT EXISTS {CATALOG_NAME}.{ns}")
print("[ok] CREATE NAMESPACE IF NOT EXISTS", f"{CATALOG_NAME}.{ns}")
[ok] CREATE NAMESPACE IF NOT EXISTS ManagedIcebergCatalog.my_namespace
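The new namespace is also visible through the S3 Tables API itself; a boto3 sketch (field names follow my reading of the ListNamespaces response shape and may need adjusting):

```python
# Sketch: confirm the namespace via the S3 Tables ListNamespaces API.
import boto3

s3tables = boto3.client("s3tables")
resp = s3tables.list_namespaces(tableBucketARN=TABLE_BUCKET_ARN)
print([n["namespace"] for n in resp["namespaces"]])  # expect [["my_namespace"]]
```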
In [39]:
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {cat}.{ns}.{table} (
        id STRING,
        name STRING,
        region STRING
    )
    USING iceberg
    PARTITIONED BY (region)
    TBLPROPERTIES ('format-version' = '2')
""").show()
++
||
++
++
In [40]:
! aws s3tables list-table-buckets
{
    "tableBuckets": [
        {
            "arn": "arn:aws:s3tables:us-east-1:000000000000:bucket/my-table-bucket",
            "name": "my-table-bucket",
            "ownerAccountId": "000000000000",
            "createdAt": "2026-04-04T14:16:27.393500+00:00"
        }
    ]
}
In [41]:
! aws s3tables list-tables --table-bucket-arn arn:aws:s3tables:us-east-1:000000000000:bucket/my-table-bucket
{
    "tables": [
        {
            "namespace": [
                "my_namespace"
            ],
            "name": "demo_table1",
            "type": "customer",
            "tableARN": "arn:aws:s3tables:us-east-1:000000000000:bucket/my-table-bucket/table/demo_table1",
            "createdAt": "2026-04-04T14:17:29.839974+00:00",
            "modifiedAt": "2026-04-04T14:17:29.839974+00:00"
        }
    ]
}
 
In [42]:
spark.sql(f"""
    INSERT INTO {cat}.{ns}.{table} (id, name, region) VALUES
    ('1', 'Alice', 'us-east-1'),
    ('2', 'Bob', 'eu-west-1')
""")
print("[ok] INSERT INTO ... (id, name, region) VALUES ...")

spark.sql(f"SELECT * FROM {cat}.{ns}.{table} ORDER BY id").show(truncate=False)
print("[ok] SELECT * ORDER BY id")
[ok] INSERT INTO ... (id, name, region) VALUES ...
+---+-----+---------+
|id |name |region   |
+---+-----+---------+
|1  |Alice|us-east-1|
|2  |Bob  |eu-west-1|
+---+-----+---------+
[ok] SELECT * ORDER BY id
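Since this is an Iceberg v2 table, its commit history and data files can be inspected through Iceberg's built-in metadata tables; a sketch:

```python
# Sketch: Iceberg metadata tables sit alongside the data.
# "snapshots" has one row per commit; the INSERT above adds one.
spark.sql(f"SELECT snapshot_id, operation FROM {cat}.{ns}.{table}.snapshots").show(truncate=False)

# "files" lists the underlying data files, one per region partition here.
spark.sql(f"SELECT file_path, record_count FROM {cat}.{ns}.{table}.files").show(truncate=False)
```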