Example Usage
Payload Code
On the satellite, the producer will typically fetch data from the bus, perform an observation, post-process the recorded data (e.g., to remove duplicate data points or apply custom compression), and then send the file.
The data pipeline will NOT compress the data in flight; the burden is on the sender to compress the data, which gives them the option to use a compressor tuned to the specific data being transferred (a sketch of this follows the example below).
Python
import os
from oort_sdk_client import SdkApi
from oort_sdk_client.models import (
    SendFileRequest, SendOptions, TTLParams, RetrieveFileRequest
)
from custom_application import (
    observe, process, save_to_file, create_logs
)
# these topic names will be provided by the Spire Constellation Ops team
topic_primary = "custom-application"
topic_logs = "custom-application-logs"
topic_raw = "custom-application-raw"
topic_uplink = "custom-application-uploads"
agent = SdkApi()
# Download files that are available in the agent's inbox. Note that this is
# not production-level code; this step is typically done as part of the
# signaling configure step. Please refer to the examples in the customer
# documentation that has been provided separately.
available_files = agent.query_available_files(topic_uplink).files
for new_file_info in available_files:
    # Delivery hints are set when uploading a file through the Tasking API
    dest_path = new_file_info.delivery_hints.dest_path
    mode = int(f"0o{new_file_info.delivery_hints.mode}", 8)
    # The oort agent will not create intermediate directories when
    # extracting a file, so we need to create them ourselves
    os.makedirs(os.path.dirname(dest_path), mode=0o755, exist_ok=True)
    req = RetrieveFileRequest(
        id=new_file_info.id,
        save_path=dest_path)
    rfinfo = agent.retrieve_file(req)
    # Set the file mode based on the delivery hints
    os.chmod(dest_path, mode)
while True:
    raw_observation = observe()
    # on-board processing may be done to extract the most important data
    processed_observation = process(raw_observation)
    raw_filename = save_to_file(raw_observation)
    processed_filename = save_to_file(processed_observation)

    # send the important processed data with default options
    req = SendFileRequest(
        destination="ground",
        topic=topic_primary,
        filepath=processed_filename,
        options=SendOptions())
    resp = agent.send_file(req)

    # logfiles may be very useful, but not as critical as the important
    # data observations. Send those as "bulk" data.
    # The hypothetical "create_logs" method would write any log files
    # in progress, and return a list of their filenames.
    log_files = create_logs()
    ttl = TTLParams(urgent=0, bulk=86400)
    options = SendOptions(ttl_params=ttl)
    for log_file in log_files:
        req = SendFileRequest(
            destination="ground",
            topic=topic_logs,
            filepath=log_file,
            options=options
        )
        agent.send_file(req)

    # the raw data may be much larger, but still useful if there is
    # time to transmit it. This data can be sent as "surplus"
    ttl = TTLParams(urgent=0, bulk=0, surplus=(86400 * 7))
    options = SendOptions(ttl_params=ttl)
    req = SendFileRequest(
        destination="ground",
        topic=topic_raw,
        filepath=raw_filename,
        options=options
    )
    agent.send_file(req)
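Since compression is the sender's responsibility, a producer can compress a file itself before handing it to the agent. The following is a minimal sketch using Python's standard gzip module; compress_and_send is a hypothetical helper (not part of the SDK), and any compressor tuned to the payload's data could be substituted. It reuses the agent object and the SendFileRequest/SendOptions models from the example above.

import gzip
import shutil

def compress_and_send(agent, topic, filepath):
    # Hypothetical helper: gzip the file on disk and send the
    # compressed copy in place of the original.
    compressed_path = filepath + ".gz"
    with open(filepath, "rb") as src, gzip.open(compressed_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    req = SendFileRequest(
        destination="ground",
        topic=topic,
        filepath=compressed_path,
        options=SendOptions())
    return agent.send_file(req)

# e.g., compress_and_send(agent, topic_primary, processed_filename)

The send call itself is unchanged; only the file handed to the agent differs, and the ground side will receive it exactly as compressed here.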
Ground-side
After the data pipeline has transferred files to the ground, they are stored in S3 buckets, where they can be retrieved for further processing. Files are delivered in the original format they were sent in, so if they were sent compressed, they will be stored compressed.
The topic a file was sent to determines the specific S3 bucket it is uploaded to.
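Files can therefore be retrieved with any standard S3 tooling. The sketch below uses boto3; the bucket name and key prefix are placeholders, as the actual bucket for each topic is provided by Spire.

import gzip
import boto3

# Placeholder values: the actual bucket (and any key layout) for each
# topic is provided as part of your deployment.
BUCKET = "example-downlink-bucket"
PREFIX = "custom-application/"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        local_path = key.rsplit("/", 1)[-1]
        s3.download_file(BUCKET, key, local_path)
        # Files arrive exactly as they were sent, so decompress here
        # if the producer compressed them before sending.
        if local_path.endswith(".gz"):
            with gzip.open(local_path, "rb") as f:
                data = f.read()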