ShdwDrive Developer Tools
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
Install the shdwDrive CLI
Follow the CLI Guide
After installing Solana, make sure you have both SHDW and SOL in your wallet in order to reserve storage.
SDKs are available in JavaScript, Rust, and Python.
https://github.com/GenesysGo/shdw-drive-bug-reports
We adhere to a responsible disclosure process for security-related issues. To ensure the responsible disclosure and handling of security vulnerabilities, we ask that you follow the process outlined below.
Please provide a clear and concise description of the issue, steps to reproduce it, and any relevant screenshots or logs.
Important: For security-related issues, please include as much information as possible about how to reproduce the issue and what it relates to. Please be sure to use the "report a security vulnerability" feature in the repository listed above. If you submit a security vulnerability as a public bug report, we reserve the right to remove the report and move any communications to private channels until a resolution is made.
Security-related issues should only be reported through this repository.
While we strongly encourage the use of this repository for bug reports and security issues, you may also reach out to us via our Discord server. Join the #shdw-drive-technical-support channel for assistance. However, please note that we will redirect you to submit the bug report through this GitHub repository for proper handling and tracking.
CLI
Get started within minutes
API
Instant interaction
SDKs
Advanced applications
These are the official social media accounts for GenesysGo and the shdwEcosystem.
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
Our SDK options provide the simplest means of linking your application to shdwDrive. With a variety of environments to choose from, developers can enjoy a robust and constantly evolving platform that leverages the full potential of shdwDrive. GenesysGo is dedicated to maintaining these SDKs, continuously enhancing developer capabilities, and streamlining the building process. We value your feedback and welcome any suggestions to help us improve these valuable resources.
Direct Download | App Store Link (coming soon) | Play Store Link (coming soon)
shdwDrive v2 transforms cloud storage into a decentralized ecosystem where users can not only store their files securely but also participate in and earn from the network. Our platform eliminates traditional centralized storage providers, replacing them with a community-driven network where everyone can contribute and benefit.
shdwDrive creates a marketplace where:
Users get reliable, secure decentralized storage
SHDW tokens allow shdwOperators to provide storage capacity
The community shapes the network's future
Everyone benefits from transparent, mathematically-proven fairness
Read about shdwDrive v2 Economics
Want to dive deeper into how shdwDrive works? Check out our technical deep dive to learn about the network's economic model, proof systems, and mathematical foundations that ensure fair rewards for all participants.
Your complete resource for using shdwDrive v2 – whether you're storing files or earning rewards as a network operator. Now that we have launched our latest technology, stay tuned for frequent updates to our guides!
This section is being redesigned as we support the latest shdwDrive v2! What remains are guides for the previous version, focused on easy, actionable information to help you quickly get started building on shdwDrive v1.5 (soon to be deprecated). Step-by-step guides and line-by-line CLI instructions offer developers the quickest path to working concepts.
Much like an appendix, the reference section is a quick navigation hub for key resources. Our media kit, social media presence, and more can be found here.
This resource will be updated frequently and the developer community should feel empowered to submit Issues for edits they would like to see or platform enhancement ideas. We also have a process for Bug submissions.
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
In order to use the S3-compatible gateway for the shdwDrive network, head over to , connect your wallet and sign in, and then navigate to the storage account for which you'd like to generate an access key/secret key pair. Select the "Addons" tab and then you'll be able to enable & add S3-compatible key/secret pairs.
Once you've enabled the S3 Access addon for a given storage account, you can view and add more s3-compatible key/secret pairs by navigating to the storage account of your choosing on .
To get your credentials, simply click "View Credentials".
Once enabled, you can manage individual permissions of each key/secret pair you generate for a given storage account. The list of permissions is configured to be compatible with existing S3 clients. Some examples of compatible S3 clients are rclone, s3cmd, and even the AWS S3 CLI. There's a plethora of tooling that exists for S3-compatible gateways, which is why we chose to build this into the shdwDrive network.
In general, if you want to enable uploads with one of your keys, you'll need to enable the following permissions:
Get Object, Put Object, List Multipart Upload Parts, Abort Multipart Upload
You can also control if a key is able to read, which allows you to have read, write, and read+write keys for various use cases.
In order to access the s3 compatible gateways on the shdwDrive network, you'll need to configure your s3 client to use one of the following gateways:
https://us.shadow.cloud
https://eu.shadow.cloud
These endpoints proxy requests to the shdwDrive network and allow you to have the best network connectivity possible given your geographical preference. All uploads are synced and available globally, so you can use either endpoint.
The following is an example that you can add to your rclone configuration file. Typically, this is located in `~/.config/rclone/rclone.conf`.
[shdw-cloud]
type = s3
provider = Other
access_key_id = [redacted]
secret_access_key = [redacted]
endpoint = https://us.shadow.cloud
acl = public-read
bucket_acl = public-read
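Beyond rclone, any standard S3 client can point at these endpoints. Below is a minimal sketch using the AWS SDK for JavaScript v3; the environment variable names, the bucket value, and the path-style addressing setting are illustrative assumptions, not details confirmed by this page.

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1", // required by the client; assumed to be ignored by the gateway
  endpoint: "https://us.shadow.cloud",
  forcePathStyle: true, // assumption: path-style addressing, common for S3-compatible gateways
  credentials: {
    accessKeyId: process.env.SHDW_S3_ACCESS_KEY, // hypothetical env var holding your access key
    secretAccessKey: process.env.SHDW_S3_SECRET_KEY, // hypothetical env var holding your secret key
  },
});

// Requires a key/secret pair with Put Object permission enabled
await s3.send(
  new PutObjectCommand({
    Bucket: "your-storage-account", // hypothetical: whichever bucket name your key pair is scoped to
    Key: "hello.txt",
    Body: "Hello from the shdwDrive S3 gateway",
  })
);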
In order to ensure the stability of the shdwDrive network, speeds are initially limited to 20 MiB/s. You can opt to upgrade the bandwidth rate limit for each individual key/secret pair you generate. The upgraded rate limit is 40 MiB/s.
In the event an S3 key has been compromised, you can easily rotate the key. Simply navigate to , connect your wallet, select a storage account, and then click on the "Addons" tab. From there, you can click the "Rotate" button to rotate a given key/secret pair and generate a fresh pair.
| Solana App Store Link (coming soon) | Play Store Link (coming soon)
As a User:
Download the shdwDrive mobile app
Connect your Solana wallet
Choose a storage plan (start with 5GB free!)
Create your first bucket
Start uploading your files securely
As an Operator:
As of release 1.0.8, gossip tickets have been refreshed. All current operators must verify their ticket has correctly updated after upgrade/install.
Download the shdwDrive mobile app
Configure your Android device settings properly
Navigate to the Operator tab
Select your storage level
Acquire a valid Join Ticket
Connect your shdwNode!
As a Developer:
- integrate shdwDrive in your app
Need specific information? Jump to:
- New to shdwDrive? Start here
- Learn how to participate in the network
- integrate shdwDrive with your app
- Dive into the network architecture
- Dive deeper into how the protocol works
Have questions? Our comprehensive sections below cover everything you need to know about using and operating on shdwDrive v2. Have feedback? Let us know .
JavaScript
Rust
Python
Get the App
Guides
Reference
Official SHDW and GenesysGo podcast appearances and articles
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
Prerequisites: Install on any OS.
Then run the following command:
npm install -g @shadow-drive/cli
After you install ShdwDrive, learn how to use the CLI.
If you prefer, here is a step by step .
Further streamline your integrations with ShdwDrive using the SDKs and API.
Review the
Visit us in Discord for support and feedback!
We adhere to a responsible disclosure process for security-related issues. To ensure the responsible disclosure and handling of security vulnerabilities, we ask that you follow the process outlined below.
For non-security-related bugs, please submit a new bug report . For security-related reports, please open a "security vulnerability" report .
Please provide a clear and concise description of the issue, steps to reproduce it, and any relevant screenshots or logs.
Important: For security-related issues, please include as much information as possible about how to reproduce the issue and what it relates to. Please be sure to use the "report a security vulnerability" feature in the repository listed above. If you submit a security vulnerability as a public bug report, we reserve the right to remove the report and move any communications to private channels until a resolution is made.
Security-related issues should only be reported through this repository.
While we strongly encourage the use of this repository for bug reports and security issues, you may also reach out to us via our Discord server. Join the #shdw-drive-technical-support channel for assistance. However, please note that we will redirect you to submit the bug report through this GitHub repository for proper handling and tracking.
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
pip install shadow-drive
Also for running the examples:
pip install solders
Check out the examples directory for a demonstration of the functionality.
https://shdw-drive.genesysgo.net/[STORAGE_ACCOUNT_ADDRESS]
from shadow_drive import ShadowDriveClient
from solders.keypair import Keypair
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--keypair', metavar='keypair', type=str, required=True,
help='The keypair file to use (e.g. keypair.json, dev.json)')
args = parser.parse_args()
# Initialize client
client = ShadowDriveClient(args.keypair)
print("Initialized client")
# Create account
size = 2 ** 20
account, tx = client.create_account("full_test", size, use_account=True)
print(f"Created storage account {account}")
# Upload files
files = ["./files/alpha.txt", "./files/not_alpha.txt"]
urls = client.upload_files(files)
print("Uploaded files")
# Add and Reduce Storage
client.add_storage(2**20)
client.reduce_storage(2**20)
# Get file
current_files = client.list_files()
file = client.get_file(current_files[0])
print(f"got file {file}")
# Delete files
client.delete_files(urls)
print("Deleted files")
# Delete account
client.delete_account(account)
print("Closed account")
This package uses PyO3 to build a wrapper around the official ShdwDrive Rust SDK. For more information, see the Rust SDK documentation.
Section under development.
D.A.G.G.E.R. - Launch of - January 16th
D.A.G.G.E.R. - Launch of - September 29, 2023
shdwDrive - Release S3 Compatible Gateway - August 29, 2023
shdwDrive
shdwDrive Rust
shdwDrive
shdwDrive
Apr 5, 2023- shdwDrive Rust
Mar 16, 2023 - shdwDrive
Mar 15, 2023 - shdwDrive Rust
Feb 28, 2023 - shdwDrive Rust
Feb 27, 2023 - shdwDrive CLI
Feb 27, 2023 - shdwDrive
Feb 27, 2023 - shdwDrive
Feb 9, 2023 - shdwDrive
Feb 8, 2023 - shdwDrive
Feb 9, 2023 - shdwDrive CLI
Dec 13, 2022 - shdwDrive CLI
Nov 28, 2022 - shdwDrive
Nov 28, 2022 - shdwDrive
Sep 22, 2022 - shdwDrive CLI
Sep 22, 2022 - shdwDrive CLI
Sep 22, 2022 - shdwDrive CLI
Sep 21, 2022 - shdwDrive CLI
Sep 21, 2022 - shdwDrive
Sep 21, 2022 - shdwDrive
Sep 21, 2022 - shdwDrive CLI
Aug 26, 2022 - Digital Asset RPC Infrastructure
Jul 26, 2022 - shdwDrive
Jul 22, 2022 - shdwDrive
Jul 12, 2022 - shdwDrive
Jul 8, 2022 - shdwDrive
shdwDrive SDK
shdwDrive CLI
The shdwDrive SDK is a TypeScript SDK for interacting with shdwDrive, providing simple and efficient methods for file operations on the decentralized storage platform.
# Install from npm
npm install @shdwdrive/sdk
# Or install from repository
git clone https://github.com/GenesysGo/shdwdrive-v2-sdk.git
cd shdwdrive-v2-sdk
npm install
npm run build
# Or link the local build into your own project
cd shdwdrive-v2-sdk
npm link
cd your-project
npm link @shdwdrive/sdk
📤 File uploads (supports both small and large files)
📥 File deletion
📋 File listing
📊 Bucket usage statistics
🗂️ Folder creation and management
🔐 Secure message signing
⚡ Progress tracking for uploads
🔄 Multipart upload support for large files
import ShdwDriveSDK from '@shdwdrive/sdk';
// Initialize with wallet
const drive = new ShdwDriveSDK({}, { wallet: yourWalletAdapter });
// Or initialize with keypair
const drive = new ShdwDriveSDK({}, { keypair: yourKeypair });
const file = new File(['Hello World'], 'hello.txt', { type: 'text/plain' });
const uploadResponse = await drive.uploadFile('your-bucket', file, {
onProgress: (progress) => {
console.log(`Upload progress: ${progress.progress}%`);
}
});
console.log('File uploaded:', uploadResponse.finalized_location);
const folderResponse = await drive.createFolder('your-bucket', 'folder-name');
console.log('Folder created:', folderResponse.folder_location);
const deleteFolderResponse = await drive.deleteFolder('your-bucket', 'folder-url');
console.log('Folder deleted:', deleteFolderResponse.success);
const files = await drive.listFiles('your-bucket');
console.log('Files in bucket:', files);
const deleteResponse = await drive.deleteFile('your-bucket', 'file-url');
console.log('Delete status:', deleteResponse.success);
ShdwDriveSDK
Constructor Options
interface ShdwDriveConfig {
endpoint?: string; // Optional custom endpoint (defaults to https://v2.shdwdrive.com)
}
// Initialize with either wallet or keypair
new ShdwDriveSDK(config, { wallet: WalletAdapter });
new ShdwDriveSDK(config, { keypair: Keypair });
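For example, to point the SDK at an explicitly configured endpoint (it defaults to https://v2.shdwdrive.com when omitted):

// Initialize against an explicit endpoint with a keypair signer
const drive = new ShdwDriveSDK(
  { endpoint: 'https://v2.shdwdrive.com' },
  { keypair: yourKeypair }
);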
Methods
uploadFile(bucket: string, file: File, options?: FileUploadOptions)
deleteFile(bucket: string, fileUrl: string)
listFiles(bucket: string)
getBucketUsage(bucket: string)
createFolder(bucket: string, folderName: string)
deleteFolder(bucket: string, folderUrl: string)
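getBucketUsage is the only method above without a usage example earlier on this page; a minimal sketch follows, using the drive instance initialized above and assuming the call resolves to an object describing the bucket's storage usage.

// Query how much of the bucket's storage is in use
const usage = await drive.getBucketUsage('your-bucket');
console.log('Bucket usage:', usage);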
A command-line interface for interacting with shdwDrive storage.
📤 File uploads (supports both small and large files)
📁 Folder support (create, delete, and manage files in folders)
📥 File and folder deletion
📋 File listing
📊 Bucket usage statistics
🔐 Secure message signing
🔄 Multipart upload support for large files
You can install the CLI globally using npm:
npm install -g @shdwdrive/cli
Or use it directly from the repository:
git clone https://github.com/genesysgo/shdwdrive-v2-cli.git
cd shdwdrive-v2-cli
npm install
npm run build
npm link
The CLI uses environment variables for configuration:
SHDW_ENDPOINT: The shdwDrive API endpoint (defaults to https://v2.shdwdrive.com)
# Upload a file to a bucket
shdw-drive upload \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier \
--file path/to/your/file.txt \
--folder optional/folder/path
# Delete a file from root of bucket
shdw-drive delete \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier \
--file filename.txt
# Delete a file from a folder
shdw-drive delete \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier \
--file folder/subfolder/filename.jpg
# Create a folder
shdw-drive create-folder \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier \
--name my-folder/subfolder
# List files in a bucket
shdw-drive list \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier
# Check bucket storage usage
shdw-drive usage \
--keypair ~/.config/solana/id.json \
--bucket your-bucket-identifier
Options for upload:
-k, --keypair - Path to your Solana keypair file
-b, --bucket - Your bucket identifier
-f, --file - Path to the file you want to upload
-F, --folder - (Optional) Folder path within the bucket
Options for delete:
-k, --keypair - Path to your Solana keypair file
-b, --bucket - Your bucket identifier
-f, --file - URL or path of the file to delete
Options for create-folder:
-k, --keypair - Path to your Solana keypair file
-b, --bucket - Your bucket identifier
-n, --name - Name/path of the folder to create
Options for folder deletion:
-k, --keypair - Path to your Solana keypair file
-b, --bucket - Your bucket identifier
-p, --path - Path of the folder to delete
Clone the repository:
git clone https://github.com/genesysgo/shdwdrive-v2-cli.git
Install dependencies:
cd shdwdrive-v2-cli
npm install
Build the project:
npm run build
Link the CLI locally:
npm link
shdwDrive is a decentralized mobile storage platform that allows you to store files securely while also providing opportunities to participate in the network as an operator.
Q: How do I start using shdwDrive? A: Getting started is simple:
Download and install the shdwDrive mobile app on your Android device
Connect your wallet
Create a storage bucket or choose a storage plan
Begin uploading your files
Q: Do I need a specific wallet to use shdwDrive? A: Yes, shdwDrive requires a Solana-compatible wallet. The app will guide you through connecting your preferred wallet during setup.
Q: What happens after I connect my wallet? A: After connecting your wallet, you'll be guided through:
A brief onboarding process
Options to create storage space
Access to the main dashboard where you can manage files and operator settings
Q: What is a bucket? A: A bucket is your personal storage space on shdwDrive where you can store and organize your files. Think of it as your private folder in the decentralized network.
Q: How do I create a bucket? A: To create a bucket:
Open the shdwDrive app
Look for the "Create Bucket" button on the home screen
Follow the prompts to set up your new storage space
Q: Is there an iOS app? A: No, not at this time. Android dominates global market share with approximately 70-75% of all smartphones worldwide, while iOS (iPhone) accounts for about 25-30%. We will complete the majority of our feature rollout on Android first, refining the user and operator experience, before moving to support the iOS device family.
Q: Why do I need to authorize each file upload separately? A: Currently, each file upload requires a separate authorization through your wallet for security purposes. This is an intentional security feature that:
Ensures proper authorization of all file operations
Prevents unauthorized bulk uploads
Maintains a clear record of file ownership
Protects your data and storage space
Allows you to monitor and control your storage usage
Enables us to iterate toward rolling out more robust upload handling
Q: Can I upload multiple files at once? A: At this time, files must be uploaded individually. Each upload requires:
Selecting the file
Authorizing the transaction through your wallet
Waiting for confirmation
This process ensures proper tracking and verification of each file upload. Future updates will introduce batch upload features while maintaining security standards.
Q: What happens if my upload is interrupted? A: If an upload is interrupted:
The partial upload is automatically cancelled
No storage space is consumed
No transaction fee is charged
You can simply restart the upload
Always ensure a stable connection when uploading larger files.
Q: Is there a free storage option? A: Yes, shdwDrive offers a free 5GB storage plan to get started.
Q: What can I store in shdwDrive? A: You can store various file types including:
Photos and images
Documents
Videos
Custom folders and file structures
Q: Can I see how much storage I'm currently using? A: Yes, the app displays:
Your current storage usage
Available space in your bucket
Storage contribution level (if you're an operator)
Visual indicators of space usage
Standard Android app level information is also accessible
As of release 1.0.8, gossip tickets have been refreshed. All current operators must verify their ticket has correctly updated after upgrade/install.
Q: What is a shdwDrive Operator? A: A shdwDrive Operator contributes storage space from their Android device to the decentralized network. Operators provide real utility by making their device's unused storage available for secure file storage.
Q: What are the benefits of becoming an operator? A: As an operator, you:
Earn real revenue in USDC from user storage fees (in future releases)
Participate in a decentralized storage network
Contribute to network infrastructure
Receive programmatic revenue sharing based on your contribution
Q: What's the difference between being a storage user and an operator? A: Key differences:
Users pay for storage services in USDC
Operators provide storage capacity to the network
SHDW tokens serve as operator collateral
Operators earn revenue from actual storage fees
Q: What do I need to become an operator? A: To become an operator, you need:
An Android device (version 12L or higher)
Sufficient free storage space (minimum varies by contribution level)
Stable internet connection (WiFi required at this time)
A "Join Ticket" for network access
SHDW tokens for network collateral
Proper settings for the Android OS and app
Q: What are the recommended device requirements to run shdwDrive? A: To run shdwDrive effectively, your device should meet these specifications:
Android 12L or newer
Minimum 6 CPU cores
At least 8GB RAM
Sufficient free storage space for your chosen contribution level
Q: Which devices are supported? A: Here's a non-comprehensive list of compatible devices:
Premium/Flagship Devices
Google Pixel: 7, 7 Pro, 8, 8 Pro, 9, 9 Pro
Samsung Galaxy S: S22/+/Ultra, S23/+/Ultra, S24/+/Ultra
OnePlus: 10 Pro, 10T, 11, 11R
ASUS ROG Phone: 6, 6 Pro, 7, 7 Ultimate
Nothing Phone: 1, 2
Xiaomi POCO: X6, X6 Pro
Mid-Range Devices
Google Pixel a-series: 6a, 7a
Samsung Galaxy A-series: A53 5G, A54 5G
OnePlus Nord: N20, N200
Motorola Edge (2022, 30)
Specialty Devices (minimum storage spec)
Solana Saga
Seeker
Q: Can I run shdwDrive on older devices? A: No. While older devices might run the app, we recommend meeting the minimum specifications for optimal performance and reliability. Devices that don't meet these specs may experience:
Slower proof generation
Reduced storage efficiency
Reduced revenue potential
Potential stability issues
Eventual slashing and malice tracking in the global view manager
Q: Can I operate multiple nodes with the same wallet? A: No. Each wallet can only operate one node at a time. This means:
One wallet can only stake to one node
If you want to run multiple nodes, you'll need separate wallets for each
When switching devices, you must fully deactivate your current node before setting up the new one with the same wallet. Unstaking means waiting one Solana epoch before you are able to withdraw.
When changing your storage allocation (therefore stake allocation), you must fully unstake, wait the cooldown of one Solana epoch, and restake up to your desired allocation.
Q: What if I want to switch my node to a different device? A: To switch devices:
First deactivate and unstake on your current device
Wait the cooldown and complete the withdrawal process
Set up the new device using the same wallet, remembering only one wallet per one node
Stake and activate your new node
Remember: Never try to run nodes on multiple devices with the same wallet as this can cause conflicts and operational issues at this time.
Q: How much storage can I contribute? A: Storage contribution levels are based on 1 SHDW per ~51.2MB. Choose based on your device's available storage (leaving a buffer for critical system files) and your desired SHDW stake level. For example, staking 1,000 SHDW corresponds to roughly 1,000 × 51.2 MB ≈ 51.2 GB of contributed storage.
Q: Why doesn't my device's full storage capacity show as available? A: Available storage is affected by several factors:
System files and OS requirements consume a portion of your total storage
Existing apps and data reduce available space
Android reserves space for system operations and updates
The app maintains a safety buffer to ensure stable operation
For example, a 512GB device might show only 400GB as available because:
Android OS and pre-installed apps use ~50GB
System reserves ~30GB for updates and cache
Your personal apps and data use a portion
A safety margin is maintained for optimal performance
The app shows only safely usable storage capacity to ensure reliable node operations.
Q: Are external storage options such as Micro SD cards supported? A: Not at this time. While we have tested this feature and confirmed it to work, there is more work needed to ensure the shdwDrive runtime plays nice with how Android OS manages peripheral user storage.
Q: How do I pass verification to join the network? A: Use one of our many Join Tickets:
Ticket1:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAAB6X6SVVUSV32H6KQ4M7BNEMGCR4XTSVZTE4GBTEJQUZSJ7UTGKEJQBOE2MJ6AIAFQ5IAQDMXKAEBLJ2QBAO3OUAI=
Ticket2:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAABEW6OOCQO7EMA7JKC6PQSK274ZBATDSWD2V7LJ6EA5BBCVN6HTF3QAL5ZVNGQCAEU5MAQ====
Ticket3:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAABSU42N6PY6HFGSVBHDU7AOGSDUBY3KBCM3RQGNMVU7VNJWTF5TGFQAL5ZVNGQCAE65MAQ====
Ticket4:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAABMI3Z7RMH4XD72UMXHBTRXXV2GGWJP75WYIFSBB5WSLZVBJADC5JAAL5ZVNGQCAFI5MAQ====
Ticket5:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAABE34WB2TIQREIRXJRM6YUS2VS26BNDCO57HR3TBREHWNIK4O77MHAAL5ZVNGQCAFS5MAQ====
Ticket6:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAAB7VYWWCU2O7S4QRMICVTDZNSUHV5B75F6XYHND4QKKSVTELF4F3FAAL5ZVNGQCAF45MAQ====
Ticket 7:
XHGA5LRCCKLCCAW3KCDU44SHKJEQFXZEYYKGS43BJRLSQV2ALH2QCAABIKUOVHCXUM45VSK6URD5KE3JJXAIQNSR65I5YC552CBRRAG6NMGQAL5ZVNGQCAGG5MAQ====
Q: How do I manage my node? A: The Operator section of the app provides:
Node On/Off toggle
Current storage contribution level
Network connection status
Monitor logs
Maintain good WiFi connection
Q: What does the operator interface show me? A: The operator interface displays:
Current Status:
Storage Level: Your contributed storage amount
SHDW Collateral: Current network security deposit
Node Controls:
"Deactivate & Release Collateral" button: Stops node operation
"Withdraw" button: Retrieves your SHDW collateral after cooldown
Node On/Off toggle: Controls active participation
Configuration:
Gossip Ticket: Network access credential
RPC Endpoint: Network connection point
Node Information:
Node ID: Your unique identifier
Backup Key: Recovery information
Q: What happens when I toggle my node on? A: When you toggle your node on:
The app verifies your WiFi connection
Connects to the network using your gossip ticket
Begins participating in network operations
Starts monitoring for storage requests
Note: A stable WiFi connection is required to start your node
Q: What should I check before starting my node? A: Before toggling your node on, ensure:
You have a stable WiFi connection
Your gossip ticket is properly entered
The RPC endpoint is pre-filled correctly
Your device is charged or plugged in
You have sufficient free storage space
Q: Can I run my node while using mobile data? A: Not at this time, but soon. Your node requires a WiFi connection to:
Maintain stable network connections
Ensure efficient data transfer
Reduce mobile data usage
Provide consistent network participation
The node will automatically stop if WiFi connection is lost.
Q: What is a Gossip Ticket? A: A Gossip Ticket is your node's access credential for joining the network. It contains necessary information for establishing secure connections with other nodes.
Q: What are the storage fragments I see in my Downloads folder? A: These are secure storage units that your device uses to participate in the network. Each fragment (fragment.000, fragment.001, etc.) contains portions of the distributed storage system. Don't delete these manually - the app manages them automatically.
Q: What's the process for changing my storage/stake level? A: Currently, changing levels requires:
Deactivating your current node
Unstaking your SHDW tokens
Completing the withdrawal process
Reactivating with your new desired storage level
Direct storage updates are not supported - you must go through the full deactivation process.
Q: Are there staking time requirements? A: Yes, there are important timing considerations:
Initial stake requires a warmup period (one Solana epoch)
Unstaking requires a cooldown period (one Solana epoch)
You cannot deactivate/unstake during the warmup period
Revenue can be claimed once per 24-hour period
Withdrawal is only available after the cooldown period completes
Q: What if I want to switch to a different device? A: When switching devices:
Wait for your initial stake warmup period to complete
Deactivate and unstake on your current device
Wait for the cooldown period to complete
Withdraw your SHDW tokens
Set up the new device with the same wallet
Stake and activate your new node
Q: How do I access and share node logs? A: Logs can be accessed through:
The "View Logs" section in the operator dashboard
Use "Copy Logs" to copy to clipboard
Use "Send Logs" to share with support
Q: What are the key log message types?
Network Status Messages
Global View Updates:
├─ 🌐 Global Consensus View
├─ peers: [number]
└─ Network Messages: [number]
Shows network-wide state synchronization.
Event Processing:
├─ Events Received
└─ Network Messages: [number]
Indicates successful message processing.
Network Maintenance:
├─ ✂️ Cleanup
└─ Network Msgs: [number]
Shows network optimization activities.
Peer Management Messages
Join Operations:
├─ 🌐 Join
└─ Proof: [true|false]
Indicates new peer verification.
Synchronization:
├─ 🔄 Recieved Peer Sync
└─ Network Messages: [number]
Shows peer data synchronization.
Network Updates:
├─ ⚙️ Network Update
├─ Active: [number]
└─ Pending: [number]
Displays connection status changes.
Q: What are Storage Proof logs? A: Storage Proof logs demonstrate that your node is actively storing and validating data as part of the decentralized network. These logs confirm that your node is fulfilling its role in maintaining data integrity. A typical storage proof log entry may look like this:
[INFO] shdw_dht::dht_gossip: Storage Proof
├─ Operation: Generate
└─ CID: Hash("ab3f875881b3c9c04e3ee73f9a7fb1afac4466ba246e6da2d664cde23a4ef7d8")
Each entry shows:
A CID (Content Identifier) that uniquely identifies the data chunk.
The Operation performed (e.g., Generate).
A log message indicating that a Storage Proof has been processed.
Note: Not every node will necessarily display storage proof logs. Whether or not these logs appear depends on the node’s active/passive view and the network topology—that is, whether your node is selected to participate in the storage proof process as part of the active set.
Q: What indicates healthy node operation?
Normal Operation Patterns
Regular Network Activity:
Regular "Events Received" messages
Incrementing "Network Messages" count
Periodic "Global Consensus View" updates
Healthy Connection Status:
Stable or growing active edges
Low pending connection counts
Regular shuffle cycles
Successful peer syncs
Network Participation:
Regular cleanup operations
Periodic shuffle cycles
Consistent message processing
Stable peer count in consensus view
Warning Signs
Network Messages not incrementing
Frequent "Disconnect" messages with "unhealthy" status
High number of pending connections
No Global Consensus View updates
Frequent connection status changes
Missing periodic cleanup operations
Q: Why won't my node connect? A: Common issues and solutions:
Check WiFi connection stability
Ensure Gossip Ticket is correct
Confirm device has sufficient storage
Check for any system power restrictions
Confirm proper port forwarding (30000-60000) and that you are not behind a restrictive firewall
Q: What do I do if I see error messages? A: Common errors and fixes:
"Permission denied": Check app storage permissions
"Node disconnected": Check internet connection
"Storage allocation failed": Verify free space
Q: Why did my node automatically turn off? A: Your node may automatically stop if:
WiFi connection is lost
Device battery is critically low
Available storage drops below required level
Network connection becomes unstable
You run too low on memory
You swipe the app out of your active list, thereby hard closing it
You approve an Android system update that reboots your connections
Q: How do I diagnose connection issues using logs?
Initial Connection Problems
Network Status:
├─ 🌐 graph edges=2 pending=1
If edges are 0 or pending stays high, check:
Internet connection
Router UPnP settings
Port forwarding (30000-60000)
Firewall restrictions
Peer Connection Issues:
├─ 🌐 Peer Disconnected
├─ Active: 1
└─ Pending: 2
High pending counts indicate connection problems. Check:
Router settings
Network restrictions
Gossip ticket validity
Ongoing Operation Issues
Network Isolation Signs:
├─ Events Received
└─ Network Messages: [not incrementing]
Solutions:
Verify gossip ticket
Check network configuration
Restart node if persistent
Connection Quality Issues:
├─ 🌐 Disconnect
└─ Status: unhealthy
Indicates:
Network instability
Connection timeouts
Possible firewall issues
Additional Note on Storage Proof Logs: While storage proof logs are a key indicator that your node is actively contributing to data storage and validation, their absence does not imply a problem. Depending on your node’s current role (active versus passive) and the overall network topology, you might not see these logs—even when your node is operating normally. As long as you observe the other critical indicators of network health (such as regular event updates and stable connection metrics), your node is functioning as expected.
Q: What should I do if my node automatically stops? A: Common causes and solutions:
Connection Loss:
WiFi disconnection
Network configuration changes
Router reboots
Solution: Restore stable network connection
Device Issues:
Critical battery level
Insufficient storage
Memory constraints
App forced close
Solution: Address resource constraints
System Changes:
Android OS updates
Security policy changes
Power management interventions
Solution: Reconfigure after system changes
Q: What should I do if my app becomes unresponsive or stuck at loading? A: If your app becomes unresponsive or stuck follow these recovery steps:
Initial Recovery Steps:
Backup your node key if possible
Preserve the shdwDrive folder in Downloads
Uninstall the app
Install the latest version
Connect using your original wallet
Post-Installation Process:
The app will automatically show your existing stake status (stored on-chain)
You MUST complete the unstake process before creating a new stake:
Click "Deactivate & Unstake"
Wait for the cooldown period
Use "Withdraw" to claim your tokens
Only then proceed with new stake creation
Important Considerations:
Never attempt to create a new stake without unstaking first
Your original stake is accessible on-chain as long as you use the same wallet
Future versions will support direct node ID keypair restoration
New stake will generate a new node ID
Recovery Timeline:
Unstaking cooldown: One Solana epoch
Withdrawal availability: After cooldown
New stake activation: After withdrawal
Warning Signs During Recovery:
If your active stake doesn't appear after reinstall
If you see multiple stake positions
If the unstake option isn't available
Stop all operations and contact support immediately
Note: We are working on implementing direct node ID keypair restoration functionality for smoother recovery in future updates.
Q: How does the revenue model work? A: The revenue model is based on real utility:
Users pay storage fees in USDC
Fees are distributed to operators based on contribution
Revenue sharing is tied to actual storage provision
No artificial token rewards or inflation
Read more here
Q: What is the purpose of SHDW tokens? A: SHDW tokens serve specific functions:
Act as network collateral for operators
Enable slashing for malicious behavior
Support network security
Not designed for speculative value or rewards
Q: How do I navigate the operator staking interface? A: The operator interface can be accessed through the Operator tab in the bottom navigation bar. Here you'll find:
Current Node Level and SHDW Staked amounts at the top
Deactivate & Unstake button for stopping your node
Withdraw button for accessing staked tokens after cooldown
Node On/Off toggle for controlling node operation
Node ID and Backup Key options
Gossip Ticket and RPC Endpoint configuration
Q: How do I check my current stake and storage levels? A: Your current status is displayed at the top of the Operator screen showing:
Current Node Level (storage amount)
Current SHDW Staked amount
These values are automatically updated when changes occur.
Q: How do I initiate the unstaking process? A: To unstake:
Navigate to the Operator tab
Click the "Deactivate & Unstake" button
Confirm the transaction in your wallet
Wait for the cooldown period
Use the "Withdraw" button to claim your tokens
Q: What are the available storage and staking tiers? A: Storage levels are now based on the amount of SHDW you choose to stake with a ratio of 1 SHDW per ~51.2MB and a min. floor of 1,000 SHDW to qualify.
You must have sufficient SHDW tokens and available device storage to select a tier. All storage tiers require meeting the minimum device specifications outlined in the "Requirements & Setup" section.
Note: Future releases will introduce more granular and dynamic staking and storage options.
Q: What's the process for changing my storage level? A: Changing storage levels requires:
Deactivating your current node
Releasing your SHDW collateral
Completing the withdrawal process
Reactivating with new storage level and corresponding collateral
Q: What happens during deactivation? A: The deactivation process includes:
Node is stopped
Current stake enters pending withdrawal state
Storage allocation is released
Node configuration is preserved for potential reactivation
Q: What happens when I deactivate and unstake? A: The process involves several steps:
Deactivate & Unstake: Initiates the unstaking process
Withdrawal Period: Your tokens enter a pending withdrawal state
Withdraw: After the cooling period, you can withdraw your staked SHDW
Your node will be turned off but your operator account remains initialized
Q: What happens to my revenues when I unstake? A: When unstaking:
Any unclaimed revenue remains available
You can still claim revenue from your previous contribution
New revenue stops accruing once unstaked
Your operator account retains access to claim functions
Q: Are there staking time requirements? A: Yes, there are important timing considerations:
Initial stake requires a warmup period (one Solana epoch)
Unstaking requires a cooldown period (one Solana epoch)
You cannot deactivate/unstake during the warmup period
Revenue can be claimed once per 24-hour period
Withdrawal is only available after the cooldown period completes
Q: What if I want to switch to a different device? A: When switching devices:
Wait for your initial stake warmup period to complete
Deactivate and unstake on your current device
Wait for the cooldown period to complete
Withdraw your SHDW tokens
Set up the new device with the same wallet
Stake and activate your new node
Q: How do I change my storage contribution? A: To modify your contribution:
Access the "Update Storage Contribution" section
Select a new storage level
Adjust stake if required
Confirm the changes
Q: What happens to my settings when I update my storage contribution? A: When updating your storage contribution:
Your current node status is preserved
The app verifies available device storage
Stake requirements are recalculated
You'll see a confirmation before changes apply
Node may need to restart with new settings
Q: What happens if I want to stop being an operator? A: To deactivate:
Use the "Deactivate & Unstake" option
Your node will properly disconnect
Storage fragments will be cleaned up
Staked SHDW tokens will be returned
Earned revenue remains available to claim
Q: Where can I get help? A: Support resources:
In-app help buttons provide contextual guidance
View and share logs for technical support
Community forums and documentation
Official support channels
Report bugs through the app feedback system
Q: How do I report bugs or submit feedback? A: We have a dedicated system for bug reports and feedback:
Visit our Airtable form: https://airtable.com/appUQgLU7dOMvGB5J/pagZ4dkosLyEqjvBs/form
Fill out the relevant information
Include any error messages or screenshots
Describe the steps to reproduce the issue
Submit the form for our team to review
Q: What information should I include in a bug report? A: To help us resolve issues quickly, please include:
Your device model and Android version
App version number
Specific steps that led to the issue
Any error messages you received
Screenshots if applicable
Node logs if the issue is operator-related
Q: How can I check if my bug has already been reported? A: Before submitting a new bug report:
Check the FAQ for known issues and solutions
Review recent app updates for fixed issues
Look for similar issues in community discussions
If in doubt, submit a new report - we prefer duplicate reports to missing issues
Q: What happens after I submit feedback? A: After submission:
Our team reviews all feedback and bug reports
Critical issues are prioritized for immediate attention
Feature requests are evaluated for future updates
Common issues may be added to the FAQ
Major fixes are announced in app updates
Q: How do I backup my node information? A: Important backup steps:
Save your keypair backup securely
Document your node configuration
Keep recovery phrases safe
Never share private keys or sensitive data
Use the app's built-in backup features
Q: What is the "Backup Key" option in my operator dashboard? A: The Backup Key feature:
Downloads your node's keypair information
Stores it securely in your downloads folder
Should be kept safe and private
Is essential for node recovery
Should never be shared with others
Q: When should I backup my node key? A: It's recommended to backup your key:
Immediately after node activation
Before making major node changes
When updating the app
As part of regular security maintenance
Never share your backup key with anyone, even if they claim to be support.
Q: Is my personal data safe when operating a node? A: Yes, the app uses encrypted storage and secure communication protocols. Your device's personal data is completely separated from the storage space you contribute to the network.
Q: What information is shared with the network? A: Only technical information necessary for network operation is shared:
Your operator public key
Storage contribution metrics
Network connection details
Node performance statistics
Requirement settings for shdwOperators
Manufacturer-Specific Settings
Samsung (S21 & Newer)
Google Pixel (6 & Newer)
OnePlus (10 & Newer)
Xiaomi / POCO (Newer Models)
OPPO / Realme (Android 12L+)
ASUS (ZenFone / ROG on 12L+)
Motorola (2022+ Models)
Nothing Phone (1, 2)
Sony Xperia (Recent Models)
Vivo (Android 12L+)
Nokia (2022+ Models)
TCL (12L+ Releases)
ZTE (12L+ Releases)
Lenovo (Recent Tablets / Phones)
Black Shark (5 Series & Newer)
Infinix / Tecno (Current 12L+ Models)
Go to Settings → Battery (or Battery saver / Battery optimization).
Locate shdwDrive in the list.
Set it to Not optimized or Unrestricted.
This ensures the system does not close the app in the background.
Settings → Apps → shdwDrive → Battery (or similar).
Toggle Allow background activity to On.
Disable any “Battery optimization” or “Restrict background data” specifically for shdwDrive.
Never manually force-stop the shdwDrive app.
Keep shdwDrive in your Recent Apps (avoid swiping it away).
If there’s an Auto-start / Auto-launch feature, turn it on for shdwDrive.
Allow it to run in the background continuously.
Exclude shdwDrive from any memory cleaners or “one-tap boost” apps.
Turn off automatic hibernation for the app.
Confirm shdwDrive is on any “Do not optimize” or “White list” for battery/memory.
If possible, keep your device plugged in while the node runs.
Avoid or disable aggressive power-saving modes that close apps at low battery.
Set your low-battery threshold to around 15% so shdwDrive won’t be shut down prematurely.
A dedicated power supply can help if you’re running the node for extended periods.
Disable Adaptive Battery specifically for shdwDrive.
Turn off “Optimize for battery life” for shdwDrive.
Remove it from any “sleeping apps” or “deep-sleep” lists.
If your device has Developer Options:
Set Background process limit to Standard or No limit.
Disable or reduce any “memory optimization” tools that might kill background processes.
Use High performance mode if available for stable connectivity.
Reduce or disable thermal throttling on gaming/performance phones if comfortable.
Ensure the device has adequate cooling to avoid forced closures.
Below are recommendations for devices running Android 12L or later. Menu names may vary by region or device.
Applies to: Galaxy S21, S22, S23, A53, A54, etc. (on Android 12L+)
Battery Settings
Settings → Battery → Background usage limits
Add shdwDrive to Unrestricted apps.
More battery settings: Disable Adaptive battery and Put unused apps to sleep.
App Management
Settings → Apps → shdwDrive → Battery → Unrestricted
Under Mobile data, enable background data.
Keep shdwDrive in memory if such an option is present.
Device Care
Settings → Device care → Battery → Turn off Adaptive power saving
Choose High performance mode if needed.
Applies to: Pixel 6, 6 Pro, 7, 7 Pro, 8, etc. (Android 12L+)
Battery Settings
Settings → Battery → Battery Saver
Turn it off or only use it manually.
Settings → Apps → shdwDrive → Battery → Unrestricted.
Background Process
Settings → Apps → Special app access → Background restrictions
Make sure shdwDrive is allowed in the background.
Memory Management
Settings → Developer options → Background process limit → Standard.
Applies to: OnePlus 10, 10T, 11, etc. (Android 12L+)
Battery Optimization
Settings → Battery → Advanced → Battery optimization → shdwDrive → Don’t optimize.
Background Process
Settings → Apps → shdwDrive → Battery → Don’t optimize
Allow background data usage.
System Settings
Settings → System settings → RAM boost → Disable if it kills background tasks.
Battery → Intelligent Control → Turn off for shdwDrive.
Applies to: Xiaomi / Redmi / POCO (2022+ devices running 12L+)
Battery & Performance
Settings → Battery & performance
Turn off Battery saver for shdwDrive or set to Unrestricted.
App Management
Settings → Apps → Manage apps → shdwDrive
Autostart → Enable
Battery saver → No restrictions
Security App (If applicable)
In Security → Battery optimization → Disable for shdwDrive.
Exclude from memory optimization as well.
Applies to: OPPO/Realme devices running Android 12L+
Battery Settings
Settings → Battery → select High performance mode or exclude from Power saver.
App Management
Settings → Apps → shdwDrive
Battery → Allow background activity
Startup/Auto-launch → Enable
System Settings
Settings → Additional Settings → Background app management → Allow shdwDrive.
Applies to: ZenFone 8/9, ROG Phone 5/6/7 on 12L+
Power Management
Settings → Battery → PowerMaster
Disable optimization for shdwDrive.
Auto-start manager → Allow shdwDrive.
Mobile Manager
Mobile Manager → PowerMaster → High performance
Keep shdwDrive unrestricted in background.
App Specific
Settings → Apps → shdwDrive → allow auto-start and background activity.
Applies to: 2022+ models running Android 12L or later
Battery Settings
Settings → Battery → Turn off Adaptive Battery for shdwDrive or pick Unrestricted.
App Management
Settings → Apps → shdwDrive → Battery → Unrestricted
Mobile data & Wi-Fi → Allow background data.
Performance
Settings → System → (Developer options or Gestures) → ensure no forced app closures.
Applies to: Nothing Phone (1) & (2)
Battery Settings
Settings → Battery → Battery optimization → shdwDrive → Don’t optimize.
App Management
Settings → Apps → shdwDrive
Battery usage → Unrestricted
Background process → Allow
System Settings
Settings → System → Developer options → set Background process limit to Standard or no limit.
Applies to: Xperia devices launched or updated to 12L+
Battery Settings
Settings → Battery → turn off Adaptive Battery for shdwDrive.
Disable or bypass STAMINA mode if it kills the node.
App Management
Settings → Apps → shdwDrive → Advanced → Battery optimization → Don’t optimize.
Allow background data usage.
Power Management
Check that Adaptive battery is off or not affecting shdwDrive.
Let the app run freely in background settings.
Applies to: Vivo (OriginOS/Funtouch OS 12L+)
Battery Settings
Settings → Battery → Background power consumption → Allow for shdwDrive.
Exclude from power saving mode.
iManager Settings
iManager → App Manager → Auto-start → Enable for shdwDrive.
Background running → Allow.
App Management
Settings → Apps → shdwDrive → Background wake up → Allow
Background running permission → Allow
Applies to: Nokia models from 2022 onward running 12L+
Power Settings
Settings → Battery → Battery optimization → Don’t optimize for shdwDrive.
Disable Adaptive Battery for the app if possible.
App Management
Settings → Apps & notifications → shdwDrive → Advanced → Background restrictions → Off
Battery → Unrestricted
Background Activity
Settings → System → Developer options → Background process limit → Standard
Applies to: TCL devices on Android 12L+ (2022 releases and newer)
Battery Management
Settings → Battery & Performance → App power saving mode → Off for shdwDrive
Smart Manager → auto-launch → Enable for shdwDrive
App Settings
Settings → Apps → shdwDrive → Battery → Don’t restrict
Background process → Allow
System Optimization
Settings → Privacy → Smart Manager → Battery optimization → Disable for shdwDrive
Applies to: ZTE devices on Android 12L+ (e.g., Axon series)
Power Settings
Settings → Battery → power saving mode → exclude shdwDrive
App power saving → Off for shdwDrive
App Management
Settings → Apps → shdwDrive → Battery → Unrestricted
Auto-start → Enable
System Settings
Settings → Power Manager → Battery optimization → Don’t optimize for shdwDrive
Applies to: Recent Lenovo tablets / phones with 12L or later
Battery Settings
Settings → Battery → Allow background app management for shdwDrive
Exclude from power saving modes.
Security Settings
Security Center → App management → Auto-start → enable for shdwDrive
Background apps → allow
App Specific
Settings → Apps → shdwDrive → Battery → Don’t optimize
Background mobile data → Allow
Applies to: Black Shark 5 Series and later (Android 12L+)
Game Dock Settings
Game Dock → Performance → CPU/GPU → Performance mode
Background process → Allow shdwDrive
Battery Settings
Settings → Battery → App battery saver → disable for shdwDrive
Performance mode → On when plugged in
System Settings
Settings → Additional settings → Developer options → Background process → Standard
Applies to: Infinix/Tecno models running 12L+
Power Management
Settings → Battery → Power Management → Off for shdwDrive
Background apps → Allow
Phone Master
Phone Master → Auto-start → Enable shdwDrive
Background running → Allow
App Management
Settings → Apps → shdwDrive → Power usage → Don’t restrict
Auto-launch → Enable
Check Settings After Updates: System updates can revert your battery or background settings. Review them after each update.
Stay Plugged In: A stable power source ensures you won’t lose node connectivity when battery runs low.
Monitor Node Activity: If the node goes offline, revisit settings to confirm none have changed.
By following these instructions on your Android 12L+ device, you’ll help keep shdwDrive running reliably in the background for maximum node uptime. Good luck and happy node-running!
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
POST
https://shadow-storage.genesysgo.net
Creates a new storage account
Request content type: application/json
transaction*: Serialized create storage account transaction that's partially signed by the storage account owner
{
"shdw_bucket": String,
"transaction_signature": String
}
POST
https://shadow-storage.genesysgo.net
Gets on-chain and ShdwDrive Network data about a storage account
Request content type: application/json
storage_account* (String): Publickey of the storage account you want to get information for
{
storage_account: PublicKey,
reserved_bytes: Number,
current_usage: Number,
immutable: Boolean,
to_be_deleted: Boolean,
delete_request_epoch: Number,
owner1: PublicKey,
owner2: PublicKey,
accountCoutnerSeed: Number,
creation_time: Number,
creation_epoch: Number,
last_fee_epoch: Number,
identifier: String,
version: "V1"
}
{
storage_account: PublicKey,
reserved_bytes: Number,
current_usage: Number,
immutable: Boolean,
to_be_deleted: Boolean,
delete_request_epoch: Number,
owner1: PublicKey,
accountCoutnerSeed: Number,
creation_time: Number,
creation_epoch: Number,
last_fee_epoch: Number,
identifier: String,
version: "V2"
}
POST
https://shadow-storage.genesysgo.net
Uploads a single file or multiple files at once
Request content type: multipart/form-data
Example Implementation
Parameters (FormData fields)
file*: The file you want to upload. You may add up to 5 files, each with a field name of file.
message* (String): Base58 message signature.
signer* (String): Publickey of the signer of the message signature and owner of the storage account
storage_account* (String): Key of the storage account you want to upload to
{
"finalized_locations": [String],
"message": String
"upload_errors": [{file: String, storage_account: String, error: String}] or [] if no errors
}
POST
https://shadow-storage.genesysgo.net
Edits an existing file
Request content type: multipart/form-data
Parameters (FormData fields)
file* (String): The file you want to upload. You may add up to 5 files, each with a field name of file.
message* (String): Base58 message signature.
signer* (String): Publickey of the signer of the message signature and owner of the storage account
storage_account* (String): Key of the storage account you want to upload to
url* (String): URL of the original file you want to edit. Example: https://shdw-drive.genesysgo.net/<storage-account>/<file-name>
{
"finalized_location": String,
"error": String or not provided if no error
}
POST
https://shadow-storage.genesysgo.net
Get a list of all files associated with a storage account
Request content type: application/json
storageAccount (String): String version of the storage account PublicKey that you want to get a list of files for
{
"keys": [String]
}
POST
https://shadow-storage.genesysgo.net
Get a list of all files and their size associated with a storage account
Request content type: application/json
storageAccount* (String): String version of the storage account PublicKey that you want to get a list of files for
{
"files": [{"file_name": String, size: Number}]
}
POST
https://shadow-storage.genesysgo.net
Get information about an object
Request content type: application/json
location* (String): URL of the file you want to get information for
JSON object of the file's metadata in the ShdwDrive Network or an error
POST
https://shadow-storage.genesysgo.net
Deletes a file from a given Storage Account
Request content type: application/json
message (String): Base58 message signature.
signer (String): Publickey of the signer of the message signature and owner of the storage account
location (String): URL of the file you want to delete
{
"message": String,
"error": String or not passed if no error
}
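A request example is not shown for this endpoint. The sketch below is an assumption modeled on the upload and edit examples later on this page: the signed-message wording and the endpoint path used here are placeholders, not confirmed by this page, and must be verified against the live API before use.

import bs58 from 'bs58'
import nacl from 'tweetnacl'

// ASSUMPTION: message wording modeled on the upload/edit examples below; verify the exact format,
// since any mismatch fails message signature verification on the ShdwDrive Network side.
const deleteMessage = `Shadow Drive Signed Message:\nStorage Account: ${storageAccount}\nFile to delete: ${fileUrl}`;
const encodedDeleteMessage = new TextEncoder().encode(deleteMessage);
const deleteSignature = bs58.encode(nacl.sign.detached(encodedDeleteMessage, keypair.secretKey));

// ASSUMPTION: "delete-file" is a placeholder path — the real path is not listed on this page.
const deleteResponse = await fetch(`${SHDW_DRIVE_ENDPOINT}/delete-file`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    message: deleteSignature, // documented parameter: Base58 message signature
    signer: keypair.publicKey.toString(), // documented parameter: owner of the storage account
    location: fileUrl, // documented parameter: URL of the file to delete
  }),
});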
POST
https://shadow-storage.genesysgo.net
Adds storage
Request content type: application/json
transaction* (String): Serialized add storage transaction that is partially signed by the ShdwDrive network
{
message: String,
transaction_signature: String,
error: String or not provided if no error
}
POST
https://shadow-storage.genesysgo.net
Reduces storage
Request content type: application/json
transaction* (String): Serialized reduce storage transaction that is partially signed by the ShdwDrive network
{
message: String,
transaction_signature: String,
error: String or not provided if no error
}
POST
https://shadow-storage.genesysgo.net
Makes file immutable
Request content type: application/json
transaction (String): Serialized make immutable transaction that is partially signed by the ShdwDrive network
{
message: String,
transaction_signature: String,
error: String or not provided if no error
}
This example demonstrates how to securely upload files to the ShdwDrive using the provided API. It includes the process of hashing file names, creating a signed message, and sending the files along with the necessary information to the ShdwDrive endpoint.
import bs58 from 'bs58'
import nacl from 'tweetnacl'
import crypto from 'crypto'
// `files` is an array of each file passed in.
const allFileNames = files.map(file => file.fileName)
const hashSum = crypto.createHash("sha256")
// `allFileNames.toString()` creates a comma-separated list of all the file names.
const hashedFileNames = hashSum.update(allFileNames.toString())
const fileNamesHashed = hashSum.digest("hex")
// `storageAccount` is the string representation of a storage account pubkey
const message = `Shadow Drive Signed Message:\nStorage Account: ${storageAccount}\nUpload files with hash: ${fileNamesHashed}`;
const fd = new FormData();
// `files` is an array of each file passed in
for (let j = 0; j < files.length; j++) {
fd.append("file", files[j].data, {
contentType: files[j].contentType as string,
filename: files[j].fileName,
});
}
// Expect the final message string to look something like this if you were to output it
// Shadow Drive Signed Message:
// Storage Account: ABC123
// Upload files with hash: hash1
// If the message is not formatted like above exactly, it will fail message signature verification
// on the ShdwDrive Network side.
const encodedMessage = new TextEncoder().encode(message);
// Uses https://github.com/dchest/tweetnacl-js to sign the message. If it's not signed in the same manner,
// the message will fail signature verification on the ShdwDrive Network side.
// This will return a base58 byte array of the signature.
const signedMessage = nacl.sign.detached(encodedMessage, keypair.secretKey);
// Convert the byte array to a bs58-encoded string
const signature = bs58.encode(signedMessage)
fd.append("message", signature);
fd.append("signer", keypair.publicKey.toString());
fd.append("storage_account", storageAccount.toString());
fd.append("fileNames", allFileNames.toString());
const request = await fetch(`${SHDW_DRIVE_ENDPOINT}/upload`, {
method: "POST",
body: fd,
});
In this example, we demonstrate how to edit a file in ShdwDrive using the API and message signature verification. The code imports necessary libraries, constructs a message to be signed, encodes and signs the message, and sends an API request to edit the file on ShdwDrive.
import bs58 from 'bs58'
import nacl from 'tweetnacl'
// `storageAccount` is the string representation of a storage account pubkey
// `fileName` is the name of the file to be edited
// `sha256Hash` is the sha256 hash of the new file's contents
const message = `ShdwDrive Signed Message:\n StorageAccount: ${storageAccount}\nFile to edit: ${fileName}\nNew file hash: ${sha256Hash}`
// Expect the final message string to look something like this if you were to output it
// ShdwDrive Signed Message:
// StorageAccount: ABC123
// File to edit: file.png
// New file hash: <sha256 hash of the new file contents>
// If the message is not formatted like above exactly, it will fail message signature verification
// on the ShdwDrive Network side.
const encodedMessage = new TextEncoder().encode(message);
// Uses https://github.com/dchest/tweetnacl-js to sign the message. If it's not signed in the same manner,
// the message will fail signature verification on the ShdwDrive Network side.
// This will return a base58 byte array of the signature.
const signedMessage = nacl.sign.detached(encodedMessage, keypair.secretKey);
// Convert the byte array to a bs58-encoded string
const signature = bs58.encode(signedMessage)
const fd = new FormData();
fd.append("file", fileData, {
contentType: fileContentType as string,
filename: fileName,
});
fd.append("signer", keypair.publicKey.toString())
fd.append("message", signature)
fd.append("storage_account", storageAccount)
const uploadResponse = await fetch(`${SHDW_DRIVE_ENDPOINT}/edit`, {
method: "POST",
body: fd,
});
In this example, we demonstrate how to delete a file from the ShdwDrive using a signed message and the ShdwDrive API. The code first constructs a message containing the storage account and the file URL to be deleted. It then encodes and signs the message using the tweetnacl library. The signed message is then converted to a bs58-encoded string. Finally, a POST request is sent to the ShdwDrive API endpoint to delete the file.
import bs58 from 'bs58'
import nacl from 'tweetnacl'
// `storageAccount` is the string representation of a storage account pubkey
// `url` is the link to the ShdwDrive file, just like the previous implementation needed the url input
const message = `ShdwDrive Signed Message:\nStorageAccount: ${storageAccount}\nFile to delete: ${url}`
// Expect the final message string to look something like this if you were to output it
// ShdwDrive Signed Message:
// StorageAccount: ABC123
// File to delete: https://shadow-drive.genesysgo.net/ABC123/file.png
// If the message is not formatted like above exactly, it will fail message signature verification
// on the ShdwDrive Network side.
const encodedMessage = new TextEncoder().encode(message);
// Uses https://github.com/dchest/tweetnacl-js to sign the message. If it's not signed in the same manner,
// the message will fail signature verification on the ShdwDrive Network side.
// This will return a base58 byte array of the signature.
const signedMessage = nacl.sign.detached(encodedMessage, keypair.secretKey);
// Convert the byte array to a bs58-encoded string
const signature = bs58.encode(signedMessage)
const deleteRequestBody = {
signer: keypair.publicKey.toString(),
message: signature,
location: url
}
const deleteRequest = await fetch(`${SHDW_DRIVE_ENDPOINT}/delete-file`, {
method: "POST",
headers: {
"Content-Type": "application/json"
},
body: JSON.stringify(deleteRequestBody)
})
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
The CLI is the easiest way to interact with shdwDrive. You can use your favorite shell scripting language, or just type the commands one at a time. For test driving shdwDrive, this is the best way to get started.
Prerequisites: Install NodeJS LTS 16.17.1 on any OS.
Then run the following command
npm install -g @shadow-drive/cli
In order to interact with shdwDrive, we're going to need a Solana wallet and CLI to interact with the Solana blockchain.
NOTE: The shdwDrive CLI uses its own RPC configuration. It does not use your Solana environment configuration.
Check HERE for the latest version.
sh -c "$(curl -sSfL https://release.solana.com/v1.14.3/install)"
Upon install, follow that up immediately with:
export PATH="/home/sol/.local/share/solana/install/active_release/bin:$PATH"
We need to have a keypair in .json format to use the shdwDrive CLI. This is going to be the wallet that owns the storage account. If you want, you can convert your browser wallet into a .json file by exporting the private keys. Solflare by default exports it in a .json format (it looks like a standard array of integers, [1,2,3,4...]). Phantom, however, needs some help and we have just the tool to do that.
If you want to create a new wallet, just use
solana-keygen new -o ~/shdw-keypair.json
You will see it write a new keypair file and it will display the pubkey, which is your wallet address.
You'll need to send a small amount of SOL and SHDW to that wallet address to proceed! The SOL is used to pay for transaction fees, the SHDW is used to create (and expand) the storage account!
shdwDrive CLI comes with integrated help. All shdwDrive commands begin with shdw-drive.
shdw-drive help
The above command will display the CLI's help output, listing all available commands.
You can get further help on each of these commands by typing the full command, followed by the --help option.
shdw-drive create-storage-account --help
This is one of the few commands where you will need SHDW. Before the command executes, it will prompt you as to how much SHDW will be required to reserve the storage account. There are three required options:
-kp, --keypair
Path to wallet that will create the storage account
-n, --name
What you want your storage account to be named. (Does not have to be unique)
-s, --size
Amount of storage you are requesting to create. This should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported.
Example:
shdw-drive create-storage-account -kp ~/shdw-keypair.json -n "pony storage drive" -s 1GB
Options for this command:
-kp, --keypair
Path to wallet that will upload the file
-f, --file
File path. Current file size limit is 1GB through the CLI.
If you have multiple storage accounts it will present you with a list of owned storage accounts to choose from. You can optionally provide your storage account address with:
-s, --storage-account
Storage account to upload file to.
--rpc <your-RPC-endpoint>
Pass a custom RPC endpoint. This can resolve 410 errors if you are using methods not available from the default free public endpoint.
Example 1:
shdw-drive upload-file -kp ~/shdw-keypair.json -f ~/AccountHolders.csv
Example 2 with RPC:
shdw-drive upload-file -kp ~/shdw-keypair.json -f ~/AccountHolders.csv --rpc <https://some-solana-api.com>
A more realistic use case is to upload an entire directory of, say, NFT images and metadata. It's basically the same thing, except we point the command to a directory.
Options:
-kp, --keypair
Path to wallet that will upload the files
-d, --directory
Path to folder of files you want to upload.
-s, --storage-account
Storage account to upload file to.
-c, --concurrent
Number of concurrent batch uploads. (default: "3")
--rpc <your-RPC-endpoint>
Pass a custom RPC endpoint. This can resolve 410 errors if you are using methods not available from the default free public endpoint.
Example 1:
shdw-drive upload-multiple-files -kp ~/shdw-keypair.json -d ~/ponyNFT/assets/
Example 2 with RPC:
shdw-drive upload-multiple-files -kp ~/shdw-keypair.json -d ~/ponyNFT/assets/ --rpc <https://some-solana-api.com>
This command is used to replace an existing file that has the exact same name. If you attempt to upload this file using edit-file and an existing file with the same name is not already there, the request will fail.
There are three requirements for this command:
-kp, --keypair
Path to wallet that will upload the file
-f, --file
File path. Current file size limit is 1GB through the CLI. File must be named the same as the one you originally uploaded
-u, --url
ShdwDrive URL of the file you are requesting to edit
Example:
shdw-drive edit-file --keypair ~/shdw-keypair.json --file ~/ponyNFT/01.json --url https://shdw-drive.genesysgo.net/abc123def456ghi789/0.json
This is straightforward, but it's important to note that once a file is deleted, it's gone for good.
There are two requirements and there aren't any options outside of the standard ones:
-kp, --keypair
Path to the keypair file for the wallet that owns the storage account and file
-u, --url
ShdwDrive URL of the file you are requesting to delete
Example:
shdw-drive delete-file --keypair ~/shdw-keypair.json --url https://shdw-drive.genesysgo.net/abc123def456ghi789/0.json
You can expand the storage size of a storage account. This command consumes SHDW tokens.
There are only two requirements for this call
-kp, --keypair
Path to wallet that will upload the files
-s, --size
Amount of storage you are requesting to add to your storage account. Should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported currently
If you have more than one account, you'll get to pick which storage account you want to add storage to.
Example:
shdw-drive add-storage -kp ~/shdw-keypair.json -s 100MB
You can reduce your storage account and reclaim your unused SHDW tokens. This is a two part operation where you first reduce your account size, and then request your SHDW tokens. First, let's reduce the storage account size.
There are two requirements
-kp, --keypair
Path to wallet that will upload the files
-s, --size
Amount of storage you are requesting to remove from your storage account. Should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported currently
Example:
shdw-drive reduce-storage -kp ~/shdw-keypair.json -s 500MB
Since you reduced the amount of storage being used in the previous step, you are now free to claim your unused SHDW tokens. The only requirement here is a keypair.
Example:
shdw-drive claim-stake -kp ~/shdw-keypair.json
You can entirely remove a storage account from ShdwDrive. Upon completion, your SHDW tokens will be returned to the wallet.
NOTE: You have a grace period upon deletion that lasts until the end of the current Solana epoch. Go HERE to see how much time is remaining in the current Solana epoch to know how much grace period you will get.
All you need here is a keypair, and it will prompt you for the specific storage account to delete.
Example:
shdw-drive delete-storage-account ~/shdw-keypair.json
Assuming the epoch is still active, you can undelete your storage account. You only need a keypair. You will be prompted to select a storage account when running the command. This removes the deletion request.
shdw-drive undelete-storage-account -kp ~/shdw-keypair.json
One of the most unique and useful features of ShdwDrive is that you can make your storage truly permanent. With immutable storage, no file that was uploaded to the account can ever be deleted or edited. They are solidified and permanent, as is the storage account itself. You can still continue to upload files to an immutable account, as well as add storage to an immutable account.
The only requirement is a keypair. You will be prompted to select a storage account when running the command.
Example:
shdw-drive make-storage-account-immutable -kp ~/shdw-keypair.json
Create an account on which to store data. Storage accounts can be globally, irreversibly marked immutable for a one-time fee. Otherwise, files can be added or deleted from them, and space rented indefinitely.
Parameters:
--name
String
--size
Byte
Example:
shadow-drive-cli create-storage-account --name example_account --size 10MB
Queues a storage account for deletion. While the request is still enqueued and not yet carried out, a cancellation can be made (see cancel-delete-storage-account subcommand).
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli delete-storage-account --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli delete-storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Cancels the deletion of a storage account enqueued for deletion.
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli cancel-delete-storage-account --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli cancel-delete-storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Redeem tokens afforded to a storage account after reducing storage capacity.
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli claim-stake --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli claim-stake FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Increase the capacity of a storage account.
Parameters:
--storage-account
Pubkey
--size
Byte (accepts KB, MB, GB)
Example:
shadow-drive-cli add-storage --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --size 10MB
Example:
shadow-drive-cli add-storage FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB 10MB
Increase the immutable storage capacity of a storage account.
Parameters:
--storage-account
Pubkey
--size
Byte (accepts KB, MB, GB)
Example:
shadow-drive-cli add-immutable-storage --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --size 10MB
Example:
shadow-drive-cli add-immutable-storage FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB 10MB
Reduce the capacity of a storage account.
Parameters:
--storage-account
Pubkey
--size
Byte (accepts KB, MB, GB)
Example:
shadow-drive-cli reduce-storage --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --size 10MB
Example:
shadow-drive-cli reduce-storage FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB 10MB
Make a storage account immutable. This is irreversible.
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli make-storage-immutable --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli make-storage-immutable FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Fetch the metadata pertaining to a storage account.
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli get-storage-account --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli get-storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Fetch a list of storage accounts owned by a particular pubkey. If no owner is provided, the configured signer is used.
Parameters:
--owner
Option<Pubkey>
Example:
shadow-drive-cli get-storage-accounts --owner FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli get-storage-accounts
List all the files in a storage account.
Parameters:
--storage-account
Pubkey
Example:
shadow-drive-cli list-files --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Example:
shadow-drive-cli list-files FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB
Get a file, assume it's text, and print it.
Parameters:
--storage-account
Pubkey
--filename
Example:
shadow-drive-cli get-text --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --filename example.txt
Example:
shadow-drive-cli get-text FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB example.txt
Get basic file object data from a storage account file.
Parameters:
--storage-account
Pubkey
--file
String
Example:
shadow-drive-cli get-object-data --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --file example.txt
Example:
shadow-drive-cli get-object-data FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB example.txt
Delete a file from a storage account.
Parameters:
--storage-account
Pubkey
--filename
String
Example:
shadow-drive-cli delete-file --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --filename example.txt
Example:
shadow-drive-cli delete-file FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB example.txt
Edit a file in a storage account.
Parameters:
--storage-account
Pubkey
--path
PathBuf
Example:
shadow-drive-cli edit-file --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB --path /path/to/new/file.txt
Example:
shadow-drive-cli edit-file FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB /path/to/new/file.txt
Upload one or more files to a storage account.
Parameters:
--batch-size
usize (default: value of FILE_UPLOAD_BATCH_SIZE)
--storage-account
Pubkey
--files
Vec<PathBuf>
Example:
shadow-drive-cli store-files --batch-size 100 --storage-account FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB file1.txt file2.txt
Example:
shadow-drive-cli store-files FKDU64ffTrQq3E1sZsNknefrvY8WkKzCpRyRfptTnyvB file1.txt file2.txt
A list of terms that relate to all things SHDW!
Asynchronous Byzantine Fault Tolerance (aBFT): aBFT is a consensus algorithm used to provide fault tolerance for distributed systems within asynchronous networks. It is based on the Byzantine Fault Tolerance (BFT) algorithm and guarantees that, when properly configured and given tolerable network conditions, its consensus will eventually be reached — even when faults such as computer crashes, perfect cyber-attacks, or malicious manipulation have happened. aBFT is designed to be tolerant of malicious or faulty actors even when those actors make up more than one-third of the nodes inside the network. Blockchain: A peer to peer, decentralized, immutable digital ledger used to securely and efficiently store data that is permanently linked and secured using cryptography. Byzantine Fault Tolerance: A fault-tolerance system in distributed computing systems where multiple replicas are used to reach consensus, such that any faulty nodes can be tolerated and the consensus of the system can be maintained despite errors or malicious attacks. Consensus Algorithm: A consistency algorithm is a method of achieving agreement on a single data value across distributed systems. It is typically used in blockchain networks to arrive at a common data state and ensure consensus across network participants. It is used to achieve fault-tolerance in distributed systems and allows for the distributed system to remain operational even if some of its components fail. Consensus Protocols: A consensus protocol is an algorithm used to achieve agreement among distributed systems on the state of data within the system. Consensus protocols are essential in distributed systems to ensure there is no disagreement between the nodes on the data state, so that no node has a different view of the data. The protocols typically employ some form of voting, such as majority voting, or methods like proof of work, to achieve the necessary agreement on the data state. Cryptographic Hashing: Cryptographic hashing is a process used to convert data into a fixed-length string of characters, or "hashes", in order to protect the data and verify its authenticity through an encrypted code. The hash cannot be reversed to the original data and is used extensively time to ensure data integrity. Cryptographic hashes can be used to verify data integrity, authenticate data sources, and prevent tampering. Cryptography: The practice and study of techniques used to secure communication, data, and systems by transforming them into an unreadable format. Cryptography is an important component of cybersecurity, providing data protection and confidentiality. Data Caching: Data caching is a software engineering technique that stores frequently accessed or computational data in a cache in order to quickly access that data when it is needed. Data caching works by temporarily storing data to serve it quickly when requested. This can significantly improve the performance of applications, which reduces the amount of time spent serving requests. Data Encryption: The process of encoding data using encryption algorithms in order to keep the content secure and inaccessible to user without a decryption key. Data encryption makes it difficult for unauthorized users to read confidential data. Data Integrity: A quality of digital data that has been maintained over its entire life cycle, and is consistent and correct. The term is ensured through specific protocols such as data validation and error detection and correction. 
Data Partitioning: The process of splitting large data or throughputs into smaller, more manageable units. This process allows for more accuracy in data processing and faster results, as well as the ability to easily store and access the data. Data Sharding: Data Sharding is a partitioning technique which divides large datasets into smaller subsets which are stored across multiple nodes. It is used to improve scalability, availability and fault tolerance in distributed databases. When combined with replication, Data Sharding can improve the speed of queries by allowing them to run in parallel. Decentralized Storage Network: A Decentralized Storage Network is a type of distributed system which utilizes distributed nodes to store and serve data securely, without relying on a single point of failure. It is an advanced form of data storage and sharing which provides scalability and redundancy, allowing for more secure and reliable access than that offered by conventional centralized systems. Delegated Proof-of-Stake (DPoS): Delegated Proof-of-Stake (DPoS) is a consensus mechanism used in some blockchain networks. It works by allowing token holders to vote for a “delegate” of their choice to validate transactions and produce new blocks on their behalf. Delegates are rewarded for their work with a percentage of all transaction fees and new block rewards. DPoS networks are usually faster and more scalable than traditional PoW and PoS networks. Digital Signatures: A digital signature is a mathematical scheme for demonstrating the authenticity of digital messages of documents. It is used for authenticating the source of messages or documents and for providing the integrity of messages or documents. A valid digital signature gives a recipient reason to believe that the message or document was created or approved by a known sender, and has not been altered in transit. Directed Acyclic Graphs: A Directed Acyclic Graph (DAG) is a type of graph with directed edges that do not form a cycle. It consists of vertices (or nodes) and edges that connect the nodes. The direction of the edges between the nodes determines the flow of information from one node to another. A DAG is a useful structure for modeling data sets, such as queues and trees, to represent complex algorithms and processes in computer engineering. Distributed Computing: A type of computing in which different computing resources, such as memory, hard disk space, and processor time are divided between different computers working as a single system. This gives the benefits of distributed computing like scalability, load balancing, reduced latency, and improved resiliency. It is widely used in data centers and cloud computing. Distributed Consensus: The process of having a distributed system come to agreement on the state of an issue or order of operations. This is achieved through communication, verification and agreement from each of the connected nodes in the system. Distributed Database: A distributed database is a type of database which stores different parts of its data across multiple devices that are connected to computer networks, allowing for more efficient data access by multiple users and more efficient transfer of data between different locations. Distributed File System: A distributed file system is a file system that allows multiple computers to access and share files located on remote servers. It enables a computing system that consists of multiple computers to work together in order to store and manage data. 
The system can be seen as a large-scale, decentralized file storage platform that spans multiple nodes. Data is replicated across the computers so that if one computer goes down, another can take its place, which helps provide for high availability, scalability, and fault tolerance. Distributed Ledger Technology: A type of database technology that maintains a continuously growing list of records, each stored as a block and protected by cryptography. It is a database shared across multiple nodes in a network, that keeps track of digital transactions between the nodes and is protected by airtight security and tamper proof mechanisms. Transaction records are constantly updated and can be easily accessed. Distributed Ledger Technology Platforms: A distributed ledger technology (DLT) platform is a secure digital platform that allows data to be securely shared and stored in a decentralized manner across a wide range of nodes within a network. DLT platforms provide a distributed ledger solution that is tamper-resistant, secured using cryptographic encryption protocols, and can remain resilient even in the face of malicious actors. DLT platforms also enable secure and timely data exchanges, providing scalability, reliability and transparency for transactions. Distributed Storage Services: Services that allow users to store data across multiple physical storage locations. This increases reliability and availability of the data and allows for distributed workloads to take advantage of this. Distributed Storage System: A distributed storage system is a series of computer systems which interact together to manage, store, and back up massive amounts of data in a reliable and secure way. A distributed storage system is more fault-tolerant than conventional storage systems because its components are not vulnerable to a single point of failure. This allows a distributed storage system to provide higher levels of data durability and availability than a single, centralized system. Distributed Systems: A type of computing system consisting of multiple, independent components that can communicate with each other to coordinate and synchronize their actions, creating a unified system as a whole. It is a type of software architecture that is designed to maximize resources across multiple computers connected over a network. Distributed Systems Architecture: A distributed system is an interconnection of multiple independent computers that coordinate their activities and share the resources of a network, usually communicating via a message-passing interface or remote procedure calls. Distributed systems architecture includes design guidelines and approaches to ensure a system’s resilience, performance, scalability, availability, and security. Erasure Coded File Storage: Erasure coded file storage is an archive strategy that divides data into portions, and then encodes each portion multiple times using various error-correction algorithms. This redundancy makes erasure coding valuable for permanently storing critical data as it increases reliability but reduces storage costs. Fault Tolerance: Fault tolerance is the capability of a system to continue its normal operation despite the presence of hardware or software errors, disruptions, and even loss of components or data. Fault tolerant systems are designed to be resilient to failure and to continue normal operation in the event of a partial or total system failure. 
Gossip Protocol: A distributed algorithm in which each member of a distributed system periodically communicates a message to one or more nodes, which then send the same message onto other nodes, until all members of the network receive the same message. This allows for a system to remain aware of all other nodes and records in the system without the need for a central server. Hashgraph: A distributed ledger platform that uses a virtual voting algorithm to achieve consensus faster than a traditional distributed ledger. This system allows for the secure transfer of digital objects and data without needing a third party for authentication. Its consensus algorithm is based on a gossip protocol, where user nodes are able to share news and updates with each other. Hyperledger Fabric: Hyperledger Fabric is an open source software project that provides a foundation for developing applications that use distributed ledgers. It allows for secure and permissioned transactions between multiple businesses, enabling an ecosystem of participants to securely exchange data and assets. It provides support for smart contracts, digital asset management, encryption, and identity services. Immutability: The capacity of an object to remain unchanged over time, even after multiple operations and modifications. Additionally, immutability is a property by which the object remains unchanged, and all operations on that object return a new instance instead of modifying the original. Interoperability: The ability of systems or components to work together and exchange data, even though they may be from different manufacturers and have different technical specifications. Key Management: Key management is the overall management of a set of cryptographic keys used to protect data both in transit over a network and in storage. Proper key management encryption and control ensure the secure communication within applications and systems and provides a baseline for protecting sensitive information. Key managers are responsible for the storage, rotation, and renewal of encryption keys and must ensure that the data is secure against unauthorized disclosure, modification, inclusion, or loss. Merkle Tree: A Merkle tree is a tree-based data structure used for verifying the integrity of large sets of data. It consists of a parent-child relationship of data nodes, with each block in the tree taking the data of all child blocks, hashing it together, and then creating a hash of its own. This process is repeated until a single block remains, which creates a hierarchical and hashed structure. Multi-Node Clusters: A type of computing architecture, composed of multiple connected computers, or nodes, that work together and are able to act as a single system. A multi-node cluster allows for increased information processing and storage capacity, as well as increased reliability and availability. Nodes: A node is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Nodes are used to implement graphs, linked lists, trees, stacks, and queues. They are also used to represent algorithms and data structures in computer science. Oracles: An oracle is a system designed to provide users with a result of a query. These systems are often relied on to deliver accurate and reliable conclusions to users, based on the data they have provided. 
There have been several generations of oracles, ranging from hardware to software-based systems, each of which has different sets of capabilities. Orchestration: Orchestration is the process of using software automation to manage and configure cloud-based computing services. It is used to automate the management and deployment of workloads across multiple cloud platforms, allowing organizations to gain efficiency and scalability. Peer-to-Peer Networking: Peer-to-peer networking is a type of network architecture model where computers (or peer nodes) are connected together in such a way that each node can act as a client or a server for the other nodes in the network. In other words, it does not rely on a central server to manage the communication between the connected peers in the network. Proof of Stake (PoS): A consensus mechanism in blockchain technology which allows nodes to validate transactions and produce new blocks according to the amount of coins each node holds. It is an alternative to the Proof of Work (PoW) consensus protocol. In PoS, validators stake their coins, meaning they have to deposit coins with the blockchain protocol before they can validate blocks. Validators receive rewards for creating blocks and are penalized for malicious behavior. Proof of Storage (PoSt): Proof of Storage is a consensus cryptographic mechanism which is used to attest to the storage of data in a distributed storage network. The consensus model requires a randomly chosen subset of storage miners to periodically provide cryptographic proofs that they are storing data correctly and that all necessary nodes are online. The goal of PoSt is to ensure that data stored is secure, as well as increase the security and transparency of distributed storage networks. Proof of Work (PoW): A consensus algorithm by which a set of data called a “block” is validated by solving computationally-taxing mathematical problems. It is used as a security measure in blockchains as each block that is created must have a valid PoW for the block to be accepted. The difficultly of the computational problems get more difficult with time as the blockchain grows, incentivizing the participants to continue to keep the chain running. Proof-of-Stake (PoS): A consensus mechanism used in certain distributed ledger system, where validator nodes participate in block validation with a combination of network participation and staking of token or other resources. The Proof-of-Stake consensus is an alternative to Proof-of-Work (PoW) used in many other blockchain networks. Proof-of-Work: A concept used by blockchain networks to ensure consensus by requiring a certain amount of effort or work to make sure the blocks of data in the chain are legitimate and valid. This is done by large amounts of computational work, typically hashing algorithms. Public Distributed Ledger: A public distributed ledger is a decentralized database that holds information about all the transactions that have taken place across the network, distributed and maintained by all participants without the need for a central authority to manage and validate it. All participants in the network can access, view, and validate the data stored on the ledger. Public Key Infrastructure (PKI): A set of protocols, services, and standards that manage, distribute, identify and authenticate public encryption keys associated with users, computers, and organizations in a network. PKI allows secure communication and ensures that only the intended recipient can read the message. 
QuickP2P: A type of networking protocol focused on peer-to-peer (P2P) computing that allows users to exchange files and data quickly across multiple computers. QuickP2P works by breaking the files into small blocks, which can then be rapidly downloaded separately in parallel by the searching user. Quorum: The minimum number of nodes in a distributed system that must be engaged in order to reach a consensus. Quorum value is set based on the number of nodes in the system and must be greater than half of the total nodes present in the system. Replication: The process of creating redundant copies of data or objects in distributed systems for the purpose of fault-tolerant data storage or network operations. Routing Protocols: Protocols that direct traffic between computers across a network or the Internet, determining the routing path for data packets and exchanging information about available routes between routers. Routing protocols define the way routers communicate with each other to exchange network information and configure the best path for data traffic. Scalability: The ability of a system to increase its performance or capacity when given additional resources such as additional computing, memory, data storage, network bandwidth and power. Scalability is an important consideration for developing systems that must handle increasing amounts of data, workload or users. Secure Messaging: Secure messaging is the process of sending or receiving encrypted messages between two or more parties, with the intent of ensuring that only the intended recipient can access the contents of the message. The encryption process generally involves the use of public and private keys, making it secure and nearly impossible for anyone else to intercept any messages sent over the network. Secure Multi-Party Computation (MPC): A computer security framework that allows several parties to jointly compute a function over their private inputs without revealing anything other than their respective outputs to the other parties. MPC leverages the security of cryptography in order to achieve privacy and security in computation. Security Protocols: Security protocols are systems of standard rules and regulations created by computer networks to protect data and enable secure communication between devices. They are commonly used for authentication, encryption, confidentiality and data integrity. Smart Contracts: A type of protocol that is self-executing, autonomously carrying out the terms of an agreement between two or more parties when predetermined conditions are met. They are used to exchange money, assets, shares, or anything of value without the need for a third party or intermediary in a secure and trustless manner. Smart contracts are used widely within blockchain-based applications to reduce risk and increase speed and accuracy. Smart Contracts: Contracts written in computer code that are stored and executed on a blockchain network. Smart contracts are self-executing and contain the terms of an agreement between two or more parties that are written directly into the code. Smart contracts are irreversible and are enforced without the need for a third-party. State Machines: A state machine is a system composed of transitioning states, where each state is determined by the previous state and the current inputs. Each state has attached conditions and outputs, and when a the previous state and input conditions match the conditions of the current state, a single output will be generated. 
State machines are commonly used in computer engineering for finite automata. Synchronization Methods: Methods of ensuring that different parts of a distributed application or system are working from a shared set of data at a given point in time. This can be achieved by actively sending data among components or by passively waiting for components to ask for data before sending it. Synchronous Byzantine Fault Tolerance (SBFT): A consensus algorithm for blockchain networks that requires each node to be connected and active for all network messages to be exchanged and validated in a given time period. This algorithm provides a way for a distributed network to come to consensus, even when some participants may be compromised or malicious. Time-Stamping: The technique of assigning a unique and precise time value to all events stored in a record in order to define a chronological order of those events. Virtual Voting: Digital voting system where citizens are able to cast their vote over the internet, directly or through a voting portal. It has been used as an alternative to physical voting polls, particularly during the pandemic. It can also be used to verify the accuracy and security of elections. Web 3.0: The Web 3.0 concept refers to the newest generation of the internet. It is highly decentralized and automated, based on AI and distributed ledger technologies such as blockchain. It allows for a more open and secure infrastructure for data storage, authentication, and interactions between devices across the web. It has the potential to be much smarter, faster, and more efficient than its predecessor Web 2.0.
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
makeStorageImmutable (updated)
reduceStorage (updated)
refreshStake (new)
topUp (new)
Let's scaffold a React app and add our dependencies
npx create-react-app shdwapp
cd shdwapp/
yarn add @shadow-drive/sdk @project-serum/anchor \
@solana/wallet-adapter-base \
@solana/wallet-adapter-react \
@solana/wallet-adapter-react-ui \
@solana/wallet-adapter-wallets \
@solana/web3.js \
@solana-mobile/wallet-adapter-mobile
Review the Solana Web3.js SDK and Solana API resources.
Use the Solana docs and examples here if you need help. We're going to focus on ShdwDrive SDK in these docs, so if you need a primer on how to build a React site with Solana, we can refer you to other resources.
Let's start by instantiating the ShdwDrive connection class object. This will have all ShdwDrive methods and it implements the signing wallet within the class for all transactions.
At the simplest level, it is recommended for a React app to immediately try to load a connection to a user's ShdwDrives upon wallet connection. This can be done with the useEffect React hook.
import React, { useEffect } from "react";
import * as anchor from "@project-serum/anchor";
import {ShdwDrive} from "@shadow-drive/sdk";
import { useWallet, useConnection } from "@solana/wallet-adapter-react";
export default function Drive() {
const { connection } = useConnection();
const wallet = useWallet();
useEffect(() => {
(async () => {
if (wallet?.publicKey) {
const drive = await new ShdwDrive(connection, wallet).init();
}
})();
}, [wallet?.publicKey])
return (
<div></div>
);
}
This can be done with a NodeJS + TypeScript program as well.
const anchor = require("@project-serum/anchor");
const { Connection, clusterApiUrl, Keypair } = require("@solana/web3.js");
const { ShdwDrive } = require("@shadow-drive/sdk");
const key = require("./shdwkey.json");
async function main() {
let secretKey = Uint8Array.from(key);
let keypair = Keypair.fromSecretKey(secretKey);
const connection = new Connection(
clusterApiUrl("mainnet-beta"),
"confirmed"
);
const wallet = new anchor.Wallet(keypair);
const drive = await new ShdwDrive(connection, wallet).init();
}
main();
This implementation is effectively the same for both Web and Node implementations. There are three params that are required to create a storage account:
name: a friendly name for your storage account
size: the size of your storage account, with a human-readable ending of KB, MB, or GB
version: can be either v1 or v2. Note - v1 is completely deprecated and you should only use v2 moving forward.
//create account
const newAcct = await drive.createStorageAccount("myDemoBucket", "10MB", "v2");
console.log(newAcct);
This implementation is effectively the same for both Web and Node implementations. The only parameter required is either v1 or v2 for the version of storage account you created in the previous step.
const accts = await drive.getStorageAccounts("v2");
// handle printing pubKey of first storage acct
let acctPubKey = new anchor.web3.PublicKey(accts[0].publicKey);
console.log(acctPubKey.toBase58());
Full Response:
This implementation is effectively the same for both Web and Node implementations. The only parameter required is either a PublicKey object or a base-58 string of the public key.
const acct = await drive.getStorageAccount(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
console.log(acct);
Full Response:
The uploadFile method requires two parameters:
key: a PublicKey object representing the public key of the Shdw Storage Account
data: a file of either the File object type or ShadowFile object type
Check the intellisense popup below when hovering over the method.
File objects are implemented in web browsers, and ShadowFile is a custom type we implemented in TypeScript. So either you are using File in the web, or you are scripting in TS.
Here is an example with a React Component:
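A minimal sketch of one such component, assuming the wallet-adapter setup from the scaffold above; the storage account pubkey below is a placeholder that you would replace with your own:
import React, { useEffect, useState } from "react";
import * as anchor from "@project-serum/anchor";
import { ShdwDrive } from "@shadow-drive/sdk";
import { useWallet, useConnection } from "@solana/wallet-adapter-react";

// Placeholder storage account pubkey - replace with your own
const STORAGE_ACCOUNT = new anchor.web3.PublicKey(
  "EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);

export default function Uploader() {
  const { connection } = useConnection();
  const wallet = useWallet();
  const [drive, setDrive] = useState(null);

  useEffect(() => {
    (async () => {
      if (wallet?.publicKey) {
        // Initialize the ShdwDrive connection once the wallet connects
        setDrive(await new ShdwDrive(connection, wallet).init());
      }
    })();
  }, [wallet?.publicKey]);

  const onFileChange = async (e) => {
    const file = e.target.files[0];
    if (!drive || !file) return;
    // In the browser, uploadFile takes the storage account pubkey and a File object
    const result = await drive.uploadFile(STORAGE_ACCOUNT, file);
    console.log(result);
  };

  return <input type="file" onChange={onFileChange} />;
}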
And a NodeJS + TypeScript implementation would look like:
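A minimal sketch, assuming `drive` and `anchor` were set up as in the earlier NodeJS example and that a local file named mytext.txt exists:
const fs = require("fs");
// Build a ShadowFile; the name property should include the file extension
const fileToUpload = {
  name: "mytext.txt",
  file: fs.readFileSync("./mytext.txt"),
};
const acctPubKey = new anchor.web3.PublicKey(
  "EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// uploadFile takes the storage account pubkey and the ShadowFile object
const uploadResp = await drive.uploadFile(acctPubKey, fileToUpload);
console.log(uploadResp);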
This is a nearly identical implementation to uploadFile, except that it requires a FileList or array of ShadowFiles and an optional concurrency parameter.
Recall that the default setting is to attempt to upload 3 files concurrently. Here you can override this and specify how many files you want to try to upload based on the cores and bandwidth of your infrastructure.
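A minimal sketch, assuming `drive` is an initialized ShdwDrive instance and the files exist locally; the third argument overrides the default concurrency described above and is optional:
const fs = require("fs");
const acctPubKey = new anchor.web3.PublicKey(
  "EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// An array of ShadowFile objects (in the browser, a FileList from an <input type="file" multiple> works as well)
const filesToUpload = [
  { name: "0.json", file: fs.readFileSync("./assets/0.json") },
  { name: "1.json", file: fs.readFileSync("./assets/1.json") },
];
const uploadAllResp = await drive.uploadMultipleFiles(acctPubKey, filesToUpload, 3);
console.log(uploadAllResp);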
The implementation of deleteFile is the same between web and Node. There are three required parameters to delete a file:
key: the storage account's public key
url: the current URL of the file to be deleted
version: can be either v1 or v2
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const delFile = await drive.deleteFile(acctPubKey, url, "v2");
console.log(delFile);
The editFile method is a combo of uploadFile and deleteFile. Let's look at the params:
key: the Public Key of the storage account
url: the URL of the file that is being replaced
data: the file that is replacing the current file. It must have the exact same filename and extension, and it must be a File or ShadowFile object
version: either v1 or v2
const fileToUpload: ShadowFile = {
name: "mytext.txt",
file: fileBuff,
};
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const editFile = await drive.editFile(acctPubKey, url, "v2", fileToUpload);
This is a simple implementation that only requires a public key to get the file names of a storage account.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const listItems = await drive.listObjects(acctPubKey);
console.log(listItems);
And the response payload:
{ keys: [ 'index.html' ] }
This is a method to simply increase the storage limit of a storage account. It requires three params:
key: storage account public key
size: amount to increase by, must end with KB, MB, or GB
version: storage account version, must be v1 or v2
const accts = await drive.getStorageAccounts("v2");
let acctPubKey = new anchor.web3.PublicKey(accts[1].publicKey);
const addStgResp = await drive.addStorage(acctPubKey, "10MB", "v2");
This is a method to decrease the storage limit of a storage account. This implementation only requires three params - the storage account key, the amount to reduce it by, and the version.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const shrinkAcct = await drive.reduceStorage(acctPubKey, "10MB", "v2");
This method allows you to reclaim the SHDW that is no longer being used. This method only requires a storage account public key and a version.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const claimStake = await drive.claimStake(acctPubKey, "v2");
As the name implies, you can delete a storage account and all of its files. The storage account can still be recovered until the current epoch ends, but after that, it will be removed. This implementation only requires two params - a storage account key and a version.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const delAcct = await drive.deleteStorageAccount(acctPubKey, "v2");
You can still get your storage account back if the current epoch hasn't elapsed. This implementation only requires two params - an account public key and a version.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const cancelDelStg = await drive.cancelDeleteStorageAccount(acctPubKey, "v2");
constructor
This method is used to create a new instance of the ShdwDrive class. It accepts a web3 connection object and a web3 wallet. It returns an instance of the ShdwDrive class.
connection
: Connection
- initialized web3 connection object
wallet
: any
- Web3 wallet
It returns an instance of the ShdwDrive class.
const shadowDrive = await new ShdwDrive(connection, wallet).init();
// Javascript SDK example using the constructor method
// This creates a new instance of the ShdwDrive class and initializes it with the given connection and wallet parameters
const shadowDrive = await new ShdwDrive(connection, wallet).init();
addStorage
addStorage
is a method of the ShadowDrive
class defined in index.ts
at line 121. It takes three parameters: key
, size
, and version
and returns a Promise<ShadowDriveResponse>
with the confirmed transaction ID.
key
: PublicKey
- Public Key of the existing storage to increase size on
size
: string
- Amount of storage you are requesting to add to your storage account. Should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported currently.
version
: ShadowDriveVersion
- ShadowDrive version (v1 or v2)
Confirmed transaction ID
{
message: string;
transaction_signature?: string
}
const accts = await drive.getStorageAccounts("v2");
let acctPubKey = new anchor.web3.PublicKey(accts[1].publicKey);
const addStgResp = await drive.addStorage(acctPubKey, "10MB", "v2");
// Javascript SDK example using the addStorage method
// This line retrieves the storage accounts with version "v2" using the `getStorageAccounts` method of the `drive` object and stores them in the `accts` variable.
const accts = await drive.getStorageAccounts("v2")
// This line creates a new `PublicKey` object using the public key of the second storage account retrieved in the previous line and stores it in the `acctPubKey` variable.
let acctPubKey = new anchor.web3.PublicKey(accts[1].publicKey)
// This line adds a new storage allocation of size "10MB" and version "v2" to the storage account identified by the public key in `acctPubKey`. The response is stored in the `addStgResp` variable.
const addStgResp = await drive.addStorage(acctPubKey,"10MB","v2"ca
cancelDeleteStorageAccount
Implementation of cancelDeleteStorageAccount defined in index.ts:135 This method is used to cancel a delete request for a Storage Account on ShdwDrive. It accepts a Public Key of the Storage Account and the ShdwDrive version (v1 or v2). It returns a Promise<{ txid: string }> containing the confirmed transaction ID of the undelete request.
key
: PublicKey
- Publickey
Confirmed transaction ID
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const cancelDelStg = await drive.cancelDeleteStorageAccount(acctPubKey, "v2");
// Javascript SDK example using the cancelDeleteStorageAccount method
// Create a new public key object from a string representation of a Solana account public key
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Call the "cancelDeleteStorageAccount" function of the ShdwDrive API, passing in the account public key object and a string indicating the storage account version to cancel deletion for
const cancelDelStg = await drive.cancelDeleteStorageAccount(acctPubKey, "v2");
claimStake
This method is used to claim the stake owed to a Storage Account on ShdwDrive (for example, after reducing its storage). It accepts a PublicKey of the Storage Account and the ShdwDrive version (v1 or v2). It returns a Promise<{ txid: string }> containing the confirmed transaction ID of the claimStake request.
key
: PublicKey
- Publickey of Storage Account
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
Confirmed transaction ID
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const claimStake = await drive.claimStake(acctPubKey, "v2");
// Javascript SDK example using the claimStake method
// Create a new public key object with the specified value
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Call the 'claimStake' function on the 'drive' object with the account public key and 'v2' as parameters, and wait for its completion before proceeding
const claimStake = await drive.claimStake(acctPubKey, "v2");
createStorageAccount
Implementation of ShdwDrive.createStorageAccount defined in index.ts:120 This method is used to create a new Storage Account on ShdwDrive. It accepts the name of the Storage Account, the size of the requested Storage Account, and the ShdwDrive version (v1 or v2). It also accepts an optional secondary owner for the Storage Account. It returns a Promise containing the created Storage Account and the transaction signature.
name
: string
- What you want your storage account to be named. (Does not have to be unique)
size
: string
- Amount of storage you are requesting to create. Should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported currently.
version
: ShadowDriveVersion
- ShdwDrive version(v1 or v2)
owner2
(optional): PublicKey
- Optional secondary owner for the storage account.
{
"shdw_bucket": String,
"transaction_signature": String
}
//create account
const newAcct = await drive.createStorageAccount("myDemoBucket", "10MB", "v2");
console.log(newAcct);
// Javascript SDK example using the createStorageAccount method
// Calls the 'createStorageAccount' function on the 'drive' object with "myDemoBucket", "10MB", and "v2" as parameters, and waits for its completion before proceeding. The result of the function call is assigned to the 'newAcct' variable.
const newAcct = await drive.createStorageAccount("myDemoBucket", "10MB", "v2");
// Logs the value of the 'newAcct' variable to the console
console.log(newAcct);
deleteFile
This method is used to delete a file on ShdwDrive. It accepts a Public Key of your Storage Account, the ShdwDrive URL of the file you are requesting to delete and the ShdwDrive version (v1 or v2). It returns a Promise containing the confirmed transaction ID of the delete request.
key
: PublicKey
- Publickey of Storage Account
url
: string
- ShdwDrive URL of the file you are requesting to delete.
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
{
"message": String,
"error": String or not passed if no error
}
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const delFile = await drive.deleteFile(acctPubKey, url, "v2");
console.log(delFile);
// Javascript SDK example using the deleteFile method
// Assigns a string value containing the URL of the file to be deleted to the 'url' variable
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
// Creates a new public key object with a specific value and assigns it to the 'acctPubKey' variable
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Calls the 'deleteFile' function on the 'drive' object with the account public key, URL, and "v2" as parameters, and waits for its completion before proceeding. The result of the function call is assigned to the 'delFile' variable.
const delFile = await drive.deleteFile(acctPubKey, url, "v2");
// Logs the value of the 'delFile' variable to the console
console.log(delFile);
deleteStorageAccount
Implementation of ShadowDrive.deleteStorageAccount defined in index.ts:124 This method is used to delete a Storage Account on ShdwDrive. It accepts a Public Key of the Storage Account and the ShdwDrive version (v1 or v2). It returns a Promise<{ txid: string }> containing the confirmed transaction ID of the delete request.
key
: PublicKey
- Publickey of a Storage Account
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
Confirmed transaction ID
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const delAcct = await drive.deleteStorageAccount(acctPubKey, "v2");
// Javascript SDK example using the deleteStorageAccount method
// Creates a new public key object with a specific value and assigns it to the 'acctPubKey' variable
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Calls the 'deleteStorageAccount' function on the 'drive' object with the account public key and "v2" as parameters, and waits for its completion before proceeding. The result of the function call is assigned to the 'delAcct' variable.
const delAcct = await drive.deleteStorageAccount(acctPubKey, "v2");
editFile
This method is used to edit a file on ShdwDrive. It accepts a Public Key of your Storage Account, the URL of the existing file, the File or ShadowFile object, and the ShdwDrive version (v1 or v2). It returns a Promise containing the file location and the transaction signature.
key
: PublicKey
- Publickey of Storage Account
url
: string
- URL of existing file
data
: File | ShadowFile
- File or ShadowFile object, file extensions should be included in the name property of ShadowFiles.
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
{
finalized_location: string;
}
const fileToUpload: ShadowFile = {
name: "mytext.txt",
file: fileBuff,
};
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const editFile = await drive.editFile(acctPubKey, url, "v2", fileToUpload);
// Javascript SDK example using the editFile method
// Creates an object containing the name and content of the file to upload and assigns it to the 'fileToUpload' variable
const fileToUpload: ShadowFile = {
name: "mytext.txt",
file: fileBuff,
};
// Assigns a string value containing the URL of the file to be edited to the 'url' variable
const url =
"https://shdw-drive.genesysgo.net/4HUkENqjnTAZaUR4QLwff1BvQPCiYkNmu5PPSKGoKf9G/fape.png";
// Creates a new public key object with a specific value and assigns it to the 'acctPubKey' variable
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Calls the 'editFile' function on the 'drive' object with the account public key, URL, "v2", and the file object as parameters, and waits for its completion before proceeding. The result of the function call is assigned to the 'editFile' variable.
const editFile = await drive.editFile(acctPubKey, url, "v2", fileToUpload);
getStorageAccount
This method is used to get the details of a Storage Account on ShdwDrive. It accepts a Public Key of the Storage Account and returns a Promise containing the Storage Account details.
key
: PublicKey
- Publickey of a Storage Account
{
storage_account: PublicKey;
reserved_bytes: number;
current_usage: number;
immutable: boolean;
to_be_deleted: boolean;
delete_request_epoch: number;
owner1: PublicKey;
account_counter_seed: number;
creation_time: number;
creation_epoch: number;
last_fee_epoch: number;
identifier: string;
version: `${Uppercase<ShadowDriveVersion>}`;
}
const acct = await drive.getStorageAccount(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
console.log(acct);
// Javascript SDK example using the getStorageAccount method
// Calls the 'getStorageAccount' function on the 'drive' object with the account public key as a parameter, and waits for its completion before proceeding. The result of the function call is assigned to the 'acct' variable.
const acct = await drive.getStorageAccount(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Logs the resulting object to the console
console.log(acct);
getStorageAccounts
This method is used to get a list of all the Storage Accounts associated with the current user. It accepts a ShdwDrive version (v1 or v2). It returns a Promise<StorageAccountResponse[]> containing the list of storage accounts.
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
{
publicKey: anchor.web3.PublicKey;
account: StorageAccount;
}
drive
.getStorageAccounts(shadowDriveVersion)
.then((storageAccounts) =>
console.log(`List of storage accounts: ${storageAccounts}`)
)
.catch((err) => console.log(`Error getting storage accounts: ${err}`));
// Javascript SDK example using the getStorageAccounts method
// Calls the 'getStorageAccounts' function on the 'drive' object with the version parameter "v2", and waits for its completion before proceeding. The result of the function call is assigned to the 'accts' variable.
const accts = await drive.getStorageAccounts("v2");
// Uses the 'let' keyword to declare a variable 'acctPubKey', which is assigned the value of the publicKey of the first object in the 'accts' array. This value is converted to a string in Base58 format.
let acctPubKey = new anchor.web3.PublicKey(accts[0].publicKey);
console.log(acctPubKey.toBase58());
listObjects
This method is used to list the Objects in a Storage Account on ShdwDrive. It accepts a Public Key of the Storage Account and returns a Promise containing the list of Objects in the Storage Account.
storageAccount
: PublicKey
{
keys: string[];
}
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const listItems = await drive.listObjects(acctPubKey);
console.log(listItems);
// Javascript SDK example using the listObjects method
// Creates a new 'PublicKey' object using a specific public key string and assigns it to the 'acctPubKey' variable.
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Calls the 'listObjects' function on the 'drive' object with the 'acctPubKey' variable as a parameter, and waits for its completion before proceeding. The result of the function call is assigned to the 'listItems' variable.
const listItems = await drive.listObjects(acctPubKey);
// Logs the resulting object to the console.
console.log(listItems);
makeStorageImmutable
This method is used to make a Storage Account immutable on ShdwDrive. It accepts a Public Key of the Storage Account and the ShdwDrive version (v1 or v2). It returns a Promise containing the confirmed transaction ID of the makeStorageImmutable request.
key
: PublicKey
- Publickey of Storage Account
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
{
message: string;
transaction_signature?: string;
}
const key = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const result = await drive.makeStorageImmutable(key, "v2");
console.log(result);
// Javascript SDK example using the makeStorageImmutable method
// Create a new PublicKey object using a public key string.
const key = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Call the makeStorageImmutable function with the PublicKey object and a version string, and wait for it to complete.
const result = await drive.makeStorageImmutable(key, "v2");
// Log the resulting object to the console.
console.log(result);
migrate
This method is used to migrate a Storage Account on ShdwDrive. It accepts a PublicKey of the Storage Account. It returns a Promise<{ txid: string }> containing the confirmed transaction ID of the migration request.
key
: PublicKey
- Publickey of Storage Account
Confirmed transaction ID
const result = await drive.migrate(key);
// Javascript SDK example using the migrate method
// Call the migrate function on the drive object, passing in the PublicKey object as a parameter.
const result = await drive.migrate(key);
redeemRent
This method is used to redeem Rent on ShdwDrive. It accepts a Public Key of the Storage Account and the Public Key of the file account to close. It returns a Promise<{ txid: string }> containing the confirmed transaction ID of the redeemRent request.
key
: PublicKey
- Publickey of Storage Account
fileAccount
: PublicKey
- PublicKey of the file account to close
Confirmed transaction ID
const fileAccount = new anchor.web3.PublicKey(
"3p6U9s1sGLpnpkMMwW8o4hr4RhQaQFV7MkyLuW8ycvG9"
);
const result = await drive.redeemRent(key, fileAccount);
// Javascript SDK example using the redeemRent method
// Create a new PublicKey object using a public key string for the file account.
const fileAccount = new anchor.web3.PublicKey(
"3p6U9s1sGLpnpkMMwW8o4hr4RhQaQFV7MkyLuW8ycvG9"
);
// Call the redeemRent function on the drive object, passing in both PublicKey objects as parameters.
const result = await drive.redeemRent(key, fileAccount);
reduceStorage
This method is used to reduce the storage of a Storage Account on ShdwDrive. It accepts a Public Key of the Storage Account, the amount of storage you are requesting to reduce from your storage account, and the ShdwDrive version (v1 or v2). It returns a Promise containing the confirmed transaction ID of the reduce storage request.
key
: PublicKey
- Publickey of Storage Account
size
: string
- Amount of storage you are requesting to reduce from your storage account. Should be in a string like '1KB', '1MB', '1GB'. Only KB, MB, and GB storage delineations are supported currently.
version
: ShadowDriveVersion
- ShdwDrive version (v1 or v2)
{
message: string;
transaction_signature?: string;
}
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const shrinkAcct = await drive.reduceStorage(acctPubKey, "10MB", "v2");
// Javascript SDK example using the reduceStorage method
// Create a new public key object with the given string
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Reduce the storage size of the storage account with the given public key
// to 10MB using the version specified
const shrinkAcct = await drive.reduceStorage(acctPubKey, "10MB", "v2");
storageConfigPDA
This exposes the PDA account in case developers have a need to display / use the data stored in the account.
Public Key
storageConfigPDA: PublicKey;
// storageConfigPDA is a property on the ShdwDrive SDK client that holds the public key of the
// program derived address (PDA) for the ShdwDrive storage program's config account. A program
// derived address is a special account on the Solana blockchain that is derived from a program's
// public key and a specific seed. The purpose of this property is to provide a convenient way to
// obtain the PDA for the storage program's config. The config contains important information such
// as the current storage rent exemption threshold and the data size limits for storage accounts.
// This public key can be used to interact with the storage program's config account and read the
// program's global configuration settings.
storageConfigPDA: PublicKey;
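For illustration, a minimal sketch of reading this property, assuming a `drive` client has already been constructed as in the earlier examples:
// Javascript SDK sketch reading the storageConfigPDA property
// Logs the storage program config PDA as a base58 string
console.log(drive.storageConfigPDA.toBase58());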
refreshStake
This method is used to update your storage account's stake amount. It is required to call this method after calling the `topUp` method in order for your storage account to update properly.
key
: PublicKey
- Publickey of the Storage Account
version
: ShadowDriveVersion
- Can be either v1 or v2. Note: v1 is completely deprecated and you should only use v2 moving forward.
{
txid: string
}
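No example is given above, so here is a minimal sketch assuming the `drive` client and `acctPubKey` from the earlier examples; confirm the exact signature against the SDK source before relying on it:
// Javascript SDK sketch using the refreshStake method
// Refresh the stake amount for the storage account after a topUp call
const refreshStakeTx = await drive.refreshStake(acctPubKey, "v2");
console.log(refreshStakeTx);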
topUp
This method is used to top up a storage account's $SHDW balance to cover any necessary fees, like mutable storage fees which are collected every epoch. It is necessary to call the `refreshStake` method after this.
key
: PublicKey
- Publickey of the Storage Account
amount
: Number
- Amount of $SHDW to transfer to the stake account
{
txid: string;
}
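A minimal sketch, assuming the `drive` client and `acctPubKey` from the earlier examples; the unit of `amount` (whole $SHDW versus base units) should be confirmed against the SDK source:
// Javascript SDK sketch using the topUp method followed by refreshStake
// Transfer SHDW to the storage account's stake account, then refresh the stake
const topUpTx = await drive.topUp(acctPubKey, 1);
console.log(topUpTx);
const refreshTx = await drive.refreshStake(acctPubKey, "v2");
console.log(refreshTx);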
uploadFile
This method is used to upload a file to ShdwDrive. It accepts a Public Key of your Storage Account and a File or ShadowFile object. The file extensions should be included in the name property of ShadowFiles. It returns a Promise containing the file location and the transaction signature.
key
: PublicKey
- Publickey of Storage Account.
data
: File | ShadowFile
- File or ShadowFile object, file extensions should be included in the name property of ShadowFiles.
{
finalized_locations: Array<string>;
message: string;
upload_errors: Array<UploadError>;
}
const uploadFile = await drive.uploadFile(acctPubKey, fileToUpload);
// Javascript SDK example of the uploadFile method
// This line calls the uploadFile method of the drive object and passes in two parameters:
// 1. acctPubKey: A PublicKey object representing the public key of the storage account where the file will be uploaded.
// 2. fileToUpload: A ShadowFile object containing the file name and file buffer to be uploaded.
const uploadFile = await drive.uploadFile(acctPubKey, fileToUpload);
uploadMultipleFiles
This method is used to upload multiple files to a Storage Account on ShdwDrive. It accepts the Storage Account's PublicKey, a data object containing the FileList or ShadowFile array of files to upload, an optional concurrent number for the number of files to concurrently upload, and an optional callback function for every batch of files uploaded. It returns a Promise<ShadowBatchUploadResponse[]> containing the file names, locations and transaction signatures for uploaded files.
key
: PublicKey
- Storage account PublicKey to upload the files to.
data
: FileList | ShadowFile[]
- FileList or array of ShadowFile objects representing the files to upload.
concurrent
(optional): number
- Number of files to concurrently upload. Default: 3
callback
(optional): Function
- Callback function for every batch of files uploaded. A number will be passed into the callback like callback(num) indicating the number of files that were confirmed in that specific batch.
{
fileName: string;
status: string;
location: string;
}
const drive = new ShadowDrive();
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
const files = [
{
name: "file1.txt",
file: new File(["hello"], "file1.txt"),
},
{
name: "file2.txt",
file: new File(["world"], "file2.txt"),
},
{
name: "file3.txt",
file: new File(["!"], "file3.txt"),
},
];
const concurrentUploads = 2;
const callback = (numConfirmed) => {
console.log(`Confirmed ${numConfirmed} file(s) in this batch`);
};
const responses = await drive.uploadMultipleFiles(
acctPubKey,
files,
concurrentUploads,
callback
);
console.log(responses);
// Javascript SDK example of the uploadMultipleFiles method
// Create an instance of the ShdwDrive client
const drive = new ShadowDrive();
// Define the public key of the storage account where the files will be uploaded
const acctPubKey = new anchor.web3.PublicKey(
"EY8ZktbRmecPLfopBxJfNBGUPT1LMqZmDFVcWeMTGPcN"
);
// Define an array of files to upload
const files = [
{
name: "file1.txt",
file: new File(["hello"], "file1.txt"),
},
{
name: "file2.txt",
file: new File(["world"], "file2.txt"),
},
{
name: "file3.txt",
file: new File(["!"], "file3.txt"),
},
];
// Define the maximum number of concurrent uploads (optional)
const concurrentUploads = 2;
// Define a callback function invoked after each batch of uploads is confirmed (optional).
// Per the parameter description above, a number is passed indicating how many files were confirmed in that batch.
const callback = (numConfirmed) => {
console.log(`Confirmed ${numConfirmed} file(s) in this batch`);
};
// Call the uploadMultipleFiles method to upload all the files
const responses = await drive.uploadMultipleFiles(
acctPubKey,
files,
concurrentUploads,
callback
);
// Print the responses returned by the server for each file uploaded
console.log(responses);
userInfo
userInfo: PublicKey
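The property is not documented further above; as a sketch, it can be read the same way as storageConfigPDA, assuming the `drive` client from the earlier examples:
// Javascript SDK sketch reading the userInfo property
// Logs the user-info PDA associated with the connected wallet
console.log(drive.userInfo.toBase58());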
// Import required modules and constants
import * as anchor from "@project-serum/anchor";
import { getStakeAccount, findAssociatedTokenAddress } from "../utils/helpers";
import {
emissions,
isBrowser,
SHDW_DRIVE_ENDPOINT,
tokenMint,
uploader,
} from "../utils/common";
import {
ASSOCIATED_TOKEN_PROGRAM_ID,
TOKEN_PROGRAM_ID,
} from "@solana/spl-token";
import { ShadowDriveVersion, ShadowDriveResponse } from "../types";
import fetch from "node-fetch";
/**
*
* @param {anchor.web3.PublicKey} key - Publickey of a Storage Account
* @param {ShadowDriveVersion} version - ShadowDrive version (v1 or v2)
* @returns {ShadowDriveResponse} - Confirmed transaction ID
*/
export default async function makeStorageImmutable(
key: anchor.web3.PublicKey,
version: ShadowDriveVersion
): Promise<ShadowDriveResponse> {
let selectedAccount;
// Fetch the selected account based on the version
try {
switch (version.toLocaleLowerCase()) {
case "v1":
selectedAccount = await this.program.account.storageAccount.fetch(key);
break;
case "v2":
selectedAccount = await this.program.account.storageAccountV2.fetch(
key
);
break;
}
// Find associated token addresses
const ownerAta = await findAssociatedTokenAddress(
selectedAccount.owner1,
tokenMint
);
const emissionsAta = await findAssociatedTokenAddress(emissions, tokenMint);
// Get stake account
let stakeAccount = (await getStakeAccount(this.program, key))[0];
let txn;
// Create transaction based on the version
switch (version.toLocaleLowerCase()) {
case "v1":
txn = await this.program.methods
.makeAccountImmutable()
.accounts({
storageConfig: this.storageConfigPDA,
storageAccount: key,
stakeAccount,
emissionsWallet: emissionsAta,
owner: selectedAccount.owner1,
uploader: uploader,
ownerAta,
tokenMint: tokenMint,
systemProgram: anchor.web3.SystemProgram.programId,
tokenProgram: TOKEN_PROGRAM_ID,
associatedTokenProgram: ASSOCIATED_TOKEN_PROGRAM_ID,
rent: anchor.web3.SYSVAR_RENT_PUBKEY,
})
.transaction();
break;
case "v2":
txn = await this.program.methods
.makeAccountImmutable2()
.accounts({
storageConfig: this.storageConfigPDA,
storageAccount: key,
owner: selectedAccount.owner1,
ownerAta,
stakeAccount,
uploader: uploader,
emissionsWallet: emissionsAta,
tokenMint: tokenMint,
systemProgram: anchor.web3.SystemProgram.programId,
tokenProgram: TOKEN_PROGRAM_ID,
associatedTokenProgram: ASSOCIATED_TOKEN_PROGRAM_ID,
rent: anchor.web3.SYSVAR_RENT_PUBKEY,
})
.transaction();
break;
}
// Set recent blockhash and fee payer
txn.recentBlockhash = (
await this.connection.getLatestBlockhash()
).blockhash;
txn.feePayer = this.wallet.publicKey;
let signedTx;
let serializedTxn;
// Sign and serialize the transaction
if (!isBrowser) {
await txn.partialSign(this.wallet.payer);
serializedTxn = txn.serialize({ requireAllSignatures: false });
} else {
signedTx = await this.wallet.signTransaction(txn);
serializedTxn = signedTx.serialize({ requireAllSignatures: false });
}
// Send the transaction to the server
const makeImmutableResponse = await fetch(
`${SHDW_DRIVE_ENDPOINT}/make-immutable`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({
transaction: Buffer.from(serializedTxn.toJSON().data).toString(
"base64"
),
}),
}
);
// Handle server response
if (!makeImmutableResponse.ok) {
return Promise.reject(
new Error(`Server response status code: ${
makeImmutableResponse.status
} \n
Server response status message: ${(await makeImmutableResponse.json()).error}`)
);
}
// Return the response JSON
const responseJson = await makeImmutableResponse.json();
return Promise.resolve(responseJson);
} catch (e) {
return Promise.reject(new Error(e));
}
}
shdwDrive v1.5 is no longer maintained. Please migrate to v2 and consult the new developer guide for instructions.
get_storage_account_size (new)
make_storage_immutable (updated)
reduce_storage (updated)
refresh_stake (new)
top_up (new)
The Rust SDK is available on crates.io and in the Rust SDK GitHub repository
Run the following Cargo command in your project directory:
cargo add shadow-drive-sdk
Or add the following line to your Cargo.toml
shadow-drive-sdk = "0.6.1"
You can find more examples on our Github
This Rust code example demonstrates how to upload multiple files to ShdwDrive using the shadow_drive_rust library. It initializes a tracing subscriber, reads a keypair from a file, creates a ShdwDrive client, derives the storage account public key, reads files from a directory, creates a vector of ShadowFile structs for upload, and finally uploads the files to ShdwDrive.
// Example - Upload Multiple Files to ShdwDrive Using Rust
// Initialize the tracing.rs subscriber with environment filter
tracing_subscriber::fmt()
.with_env_filter("off,shadow_drive_rust=debug")
.init();
// Load keypair from file using the provided KEYPAIR_PATH
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
// Create a new ShdwDriveClient instance with the loaded keypair and server URL
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// Derive the storage account public key using the keypair's public key
let pubkey = keypair.pubkey();
let (storage_account_key, _) =
shadow_drive_rust::derived_addresses::storage_account(&pubkey, 0);
// Read files from the "multiple_uploads" directory
let dir = tokio::fs::read_dir("multiple_uploads")
.await
.expect("failed to read multiple uploads dir");
// Create a Vec of ShadowFile structs for upload
// by iterating through the directory entries
let mut files = tokio_stream::wrappers::ReadDirStream::new(dir)
.filter(Result::is_ok)
.and_then(|entry| async move {
Ok(ShadowFile::file(
entry
.file_name()
.into_string()
.expect("failed to convert os string to regular string"),
entry.path(),
))
})
.collect::<Result<Vec<_>, _>>()
.await
.expect("failed to create shdw files for dir");
// Add a ShadowFile with bytes content to the files vector
files.push(ShadowFile::bytes(
String::from("buf.txt"),
&b"this is a buf test"[..],
));
// Upload the files to the ShdwDrive using the storage_account_key
let upload_results = shdw_drive_client
.upload_multiple_files(&storage_account_key, files)
.await
.expect("failed to upload files");
//profit
println!("upload results: {:#?}", upload_results);
add_immutable_storage
Adds storage capacity to the specified immutable StorageAccount. This will fail if the StorageAccount is not immutable.
storage_account_key
- The public key of the immutable StorageAccount.
size
- The additional amount of storage you want to add. E.g. if you have an existing StorageAccount with 1MB of storage but you need 2MB total, size should equal 1MB. When specifying size, only KB, MB, and GB storage units are currently supported.
add_immutable_storage
let add_immutable_storage_response = shdw_drive_client
.add_immutable_storage(storage_account_key, Byte::from_str("1MB").expect("invalid byte string"))
.await?;
add_immutable_storage
{
message: String,
transaction_signature: String,
error: Option<String>,
}
add_storage
Adds storage capacity to the specified StorageAccount.
storage_account_key
- The public key of the StorageAccount.
size
- The additional amount of storage you want to add. E.g. if you have an existing StorageAccount with 1MB of storage but you need 2MB total, size should equal 1MB. When specifying size, only KB, MB, and GB storage units are currently supported.
add_storage
let add_storage_response = shdw_drive_client
.add_storage(storage_account_key, Byte::from_str("1MB").expect("invalid byte string"))
.await?;
add_storage
{
message: String,
transaction_signature: String,
error: Option<String>,
}
cancel_delete_storage_account
Unmarks a StorageAccount for deletion from the ShdwDrive. To prevent deletion, this method must be called before the end of the Solana epoch in which delete_storage_account is called.
storage_account_key
- The public key of the StorageAccount that you want to unmark for deletion.
cancel_delete_storage_account
let cancel_delete_storage_account_response = shdw_drive_client
.cancel_delete_storage_account(&storage_account_key)
.await?;
cancel_delete_storage_account
{
txid: String,
}
claim_stake
Claims any available stake as a result of the reduce_storage command. After reducing storage amount, users must wait until the end of the epoch to successfully claim their stake.
storage_account_key
- The public key of the StorageAccount that you want to claim excess stake from.
claim_stake
let claim_stake_response = shdw_drive_client
.claim_stake(&storage_account_key)
.await?;
claim_stake
{
txid: String,
}
create_storage_account
Creates a StorageAccount on the ShdwDrive. StorageAccounts can hold multiple files and are paid for using the SHDW token.
name
- The name of the StorageAccount. Does not need to be unique.
size
- The amount of storage the StorageAccount should be initialized with. When specifying size, only KB, MB, and GB storage units are currently supported.
create_storage_account
An example use case for this method can be found in the same github repository
// Rust SDK example of creating a StorageAccount using create_storage_account
async fn main() {
// Get keypair
let keypair_file: String = std::env::args()
.skip(1)
.next()
.expect("no keypair file provided");
let keypair: Keypair = read_keypair_file(keypair_file).expect("failed to read keypair file");
println!("loaded keypair");
// Initialize client
let client = ShadowDriveClient::new(keypair, SOLANA_MAINNET_RPC);
println!("initialized client");
// Create account
let response = client
.create_storage_account(
"test",
Byte::from_bytes(2_u128.pow(20)),
shadow_drive_sdk::StorageAccountVersion::V2,
)
.await
.expect("failed to create storage account");
println!("{:?}", response);
}
create_storage_account
{
message: String,
transaction_signature: String,
storage_account_address: String,
error: Option<String>,
}
delete_file
Marks a file for deletion from the ShdwDrive. Files marked for deletion are deleted at the end of each Solana epoch. Marking a file for deletion can be undone with cancel_delete_file, but this must be done before the end of the Solana epoch.
storage_account_key
- The public key of the StorageAccount that contains the file.
url
- The ShdwDrive url of the file you want to mark for deletion.
delete_file
let delete_file_response = shdw_drive_client
.delete_file(&storage_account_key, url)
.await?;
An example use case for this method can be found in the same github repository
// Rust SDK example of marking a file for deletion from ShdwDrive using delete_file
async fn main() {
// Get keypair
let keypair_file: String = std::env::args()
.skip(1)
.next()
.expect("no keypair file provided");
let keypair: Keypair = read_keypair_file(keypair_file).expect("failed to read keypair file");
println!("loaded keypair");
// Initialize client
let client = ShadowDriveClient::new(keypair, SOLANA_MAINNET_RPC);
println!("initialized client");
// Create account
let response = client
.create_storage_account(
"test",
Byte::from_bytes(2_u128.pow(20)),
shadow_drive_sdk::StorageAccountVersion::V2,
)
.await
.expect("failed to create storage account");
let account = Pubkey::from_str(&response.shdw_bucket.unwrap()).unwrap();
println!("created storage account");
// Upload files
let files: Vec<ShadowFile> = vec![
ShadowFile::file("alpha.txt".to_string(), "./examples/files/alpha.txt"),
ShadowFile::file(
"not_alpha.txt".to_string(),
"./examples/files/not_alpha.txt",
),
];
let response = client
.store_files(&account, files.clone())
.await
.expect("failed to upload files");
println!("uploaded files");
for url in &response.finalized_locations {
println!(" {url}")
}
// Try editing
for file in files {
let response = client
.edit_file(&account, file)
.await
.expect("failed to upload files");
assert!(!response.finalized_location.is_empty(), "failed edit");
println!("edited file: {}", response.finalized_location);
}
// Delete files
for url in response.finalized_locations {
client
.delete_file(&account, url)
.await
.expect("failed to delete files");
}
}
delete_storage_account
This function marks a StorageAccount for deletion from the ShdwDrive. If an account is marked for deletion, all files within the account will be deleted as well. Any stake remaining in the StorageAccount will be refunded to the creator. Accounts marked for deletion are deleted at the end of each Solana epoch.
storage_account_key
- The public key of the StorageAccount that you want to mark for deletion.
delete_storage_account
This method returns success if it can successfully mark the account for deletion and refund any remaining stake in the account before the end of the current Solana epoch.
delete_storage_account
let delete_storage_account_response = shdw_drive_client
.delete_storage_account(&storage_account_key)
.await?;
An example use case for this method can be found in the same github repository on line 71.
edit_file
Replace an existing file on the ShdwDrive with the given updated file.
storage_account_key
- The public key of the StorageAccount that contains the file.
url
- The ShdwDrive url of the file you want to replace.
data
- The updated ShadowFile.
edit_file
let edit_file_response = shdw_drive_client
.edit_file(&storage_account_key, url, file)
.await?;
edit_file
{
finalized_location: String,
error: String,
}
Examples found in repository
File: examples/end_to_end.rs, Line 53
// Rust SDK end to end example of getting a keypair, initializing a client,
// creating an account, uploading a file, and editing the file
async fn main() {
// Get keypair
let keypair_file: String = std::env::args()
.skip(1)
.next()
.expect("no keypair file provided");
let keypair: Keypair = read_keypair_file(keypair_file).expect("failed to read keypair file");
println!("loaded keypair");
// Initialize client
let client = ShadowDriveClient::new(keypair, SOLANA_MAINNET_RPC);
println!("initialized client");
// Create account
let response = client
.create_storage_account(
"test",
Byte::from_bytes(2_u128.pow(20)),
shadow_drive_sdk::StorageAccountVersion::V2,
)
.await
.expect("failed to create storage account");
let account = Pubkey::from_str(&response.shdw_bucket.unwrap()).unwrap();
println!("created storage account");
// Upload files
let files: Vec<ShadowFile> = vec![
ShadowFile::file("alpha.txt".to_string(), "./examples/files/alpha.txt"),
ShadowFile::file(
"not_alpha.txt".to_string(),
"./examples/files/not_alpha.txt",
),
];
let response = client
.store_files(&account, files.clone())
.await
.expect("failed to upload files");
println!("uploaded files");
for url in &response.finalized_locations {
println!(" {url}")
}
// Try editing
for file in files {
let response = client
.edit_file(&account, file)
.await
.expect("failed to upload files");
}
}
get_object_data
Retrieve object data
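The entry above has no parameters or example; the sketch below is based on the commented-out get_object_data_test helper that appears later on this page, assuming an existing `shdw_drive_client` (the URL is a placeholder):
// Rust SDK sketch using get_object_data
let location = "https://shdw-drive.genesysgo.net/<storage-account>/example.png";
let object_data = shdw_drive_client
.get_object_data(location)
.await
.expect("error getting object data");
println!("{:?}", object_data);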
get_storage_account
Returns the StorageAccount associated with the pubkey provided by a user.
key
- The public key of the StorageAccount.
get_storage_account
let storage_account = shdw_drive_client
.get_storage_account(&storage_account_key)
.await
.expect("failed to get storage account");
get_storage_account
{
storage_account: Pubkey,
reserved_bytes: u64,
current_usage: u64,
immutable: bool,
to_be_deleted: bool,
delete_request_epoch: u32,
owner_1: Pubkey,
owner_2: Pubkey,
account_counter_seed: u32,
creation_time: u32,
creation_epoch: u32,
last_fee_epoch: u32,
identifier: String,
}
get_storage_account
{
storage_account: Pubkey,
reserved_bytes: u64,
current_usage: u64,
immutable: bool,
to_be_deleted: bool,
delete_request_epoch: u32,
owner_1: Pubkey,
account_counter_seed: u32,
creation_time: u32,
creation_epoch: u32,
last_fee_epoch: u32,
identifier: String,
}
get_storage_accounts
Returns all StorageAccounts associated with the public key provided by a user.
owner
- The public key that is the owner of all the returned StorageAccounts.
get_storage_accounts
let storage_accounts = shdw_drive_client
.get_storage_accounts(&user_pubkey)
.await
.expect("failed to get storage account");
get_storage_accounts
{
storage_account: Pubkey,
reserved_bytes: u64,
current_usage: u64,
immutable: bool,
to_be_deleted: bool,
delete_request_epoch: u32,
owner_1: Pubkey,
owner_2: Pubkey,
account_counter_seed: u32,
creation_time: u32,
creation_epoch: u32,
last_fee_epoch: u32,
identifier: String,
}
get_storage_accounts
{
storage_account: Pubkey,
reserved_bytes: u64,
current_usage: u64,
immutable: bool,
to_be_deleted: bool,
delete_request_epoch: u32,
owner_1: Pubkey,
account_counter_seed: u32,
creation_time: u32,
creation_epoch: u32,
last_fee_epoch: u32,
identifier: String,
}
get_storage_account_size
This method is used to get the amount of storage currently used by a given storage account.
storage_account_key
- The public key of the StorageAccount that owns the files.
get_storage_account_size
let storage_account_size = shdw_drive_client
.get_storage_account_size(&storage_account_key)
.await?;
get_storage_account_size
{
storage_used: u64;
error: Option<String>;
}
list_objects
Gets a list of all files associated with a StorageAccount. The output contains all of the file names as strings.
storage_account_key
- The public key of the StorageAccount that owns the files.
list_objects
let files = shdw_drive_client
.list_objects(&storage_account_key)
.await?;
list_objects
Note: The response is a vector containing all of the file names as strings.
make_storage_immutable
Permanently locks a StorageAccount and all contained files. After a StorageAccount has been locked, a user will no longer be able to delete/edit files, add/reduce storage amount, or delete the StorageAccount.
storage_account_key
- The public key of the StorageAccount that will be made immutable.
make_storage_immutable
let make_immutable_response = shdw_drive_client
.make_storage_immutable(&storage_account_key)
.await?;
make_storage_immutable
{
message: String,
transaction_signature: String,
error: Option<String>,
}
migrate
Migrates a v1 StorageAccount to v2. This requires two separate transactions to reuse the original pubkey. To minimize chance of failure, it is recommended to call this method with a commitment level of Finalized.
storage_account_key
- The public key of the StorageAccount to be migrated.
migrate
let migrate_response = shdw_drive_client
.migrate(&storage_account_key)
.await?;
migrate
{
txid: String,
}
migrate_step_1
First transaction step that migrates a v1 StorageAccount to v2. Consists of copying the existing account's data into an intermediate account, and deleting the v1 StorageAccount.
migrate_step_2
Second transaction step that migrates a v1 StorageAccount to v2. Consists of recreating the StorageAccount using the original pubkey, and deleting the intermediate account.
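Neither step has a standalone example above; the sketch below mirrors the commented-out two-step migration shown in the example near the end of this page, assuming a `shdw_drive_client` and the v1 account's pubkey:
// Rust SDK sketch splitting a migration into its two exposed steps
// Step 1: copy the account data into an intermediate account and delete the v1 StorageAccount
let migrate_step_1 = shdw_drive_client
.migrate_step_1(&v1_pubkey)
.await
.expect("failed to migrate v1 step 1");
println!("Step 1 complete {:?}", migrate_step_1);
// Step 2: recreate the StorageAccount at the original pubkey and delete the intermediate account
let migrate_step_2 = shdw_drive_client
.migrate_step_2(&v1_pubkey)
.await
.expect("failed to migrate v1 step 2");
println!("Step 2 complete {:?}", migrate_step_2);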
new
Creates a new ShadowDriveClient from the given Signer and URL.
wallet
- A Signer that signs all transactions generated by the client. Typically this is a user's keypair.
rpc_url
- An HTTP URL of a Solana RPC provider.
The underlying Solana RPC client is configured with a 120s timeout and a commitment level of confirmed.
To customize RpcClient settings, see new_with_rpc.
new
use solana_sdk::signer::keypair::Keypair;
let wallet = Keypair::new();
let shdw_drive = ShadowDriveClient::new(wallet, "https://ssc-dao.genesysgo.net");
Examples found in repository
examples/end_to_end.rs
(line 19)
// Rust SDK example using `new` method to create a new ShadowDriveClient
async fn main() {
// Get keypair
let keypair_file: String = std::env::args()
.skip(1)
.next()
.expect("no keypair file provided");
let keypair: Keypair = read_keypair_file(keypair_file).expect("failed to read keypair file");
println!("loaded keypair");
// Initialize client
let client = ShadowDriveClient::new(keypair, SOLANA_MAINNET_RPC);
println!("initialized client");
}
new_with_rpc
Creates a new ShadowDriveClient from the given Signer and RpcClient.
wallet
- A Signer that signs all transactions generated by the client. Typically this is a user's keypair.
rpc_client
- A Solana RpcClient that handles sending transactions and reading accounts from the blockchain.
Providing the RpcClient allows customization of timeout and commitment level.
new_with_rpc
use solana_client::rpc_client::RpcClient;
use solana_sdk::signer::keypair::Keypair;
use solana_sdk::commitment_config::CommitmentConfig;
let wallet = Keypair::new();
let solana_rpc = RpcClient::new_with_commitment("https://ssc-dao.genesysgo.net", CommitmentConfig::confirmed());
let shdw_drive = ShadowDriveClient::new_with_rpc(wallet, solana_rpc);
redeem_rent
Reclaims the Solana rent from any on-chain file accounts. Older versions of the ShdwDrive used to create accounts for uploaded files.
storage_account_key
- The public key of the StorageAccount that contained the deleted file.
file_account_key
- The public key of the File account to be closed.
redeem_rent
let redeem_rent_response = shdw_drive_client
.redeem_rent(&storage_account_key, &file_account_key)
.await?;
redeem_rent
{
message: String,
transaction_signature: String,
error: Option<String>,
}
reduce_storage
Reduces the amount of total storage available for the given StorageAccount.
storage_account_key
- The public key of the StorageAccount whose storage will be reduced.
size
- The amount of storage you want to remove. E.g. if you have an existing StorageAccount with 3MB of storage but you want 2MB total, size should equal 1MB. When specifying size, only KB, MB, and GB storage units are currently supported.
reduce_storage
let reduce_storage_response = shdw_drive_client
.reduce_storage(&storage_account_key, reduced_bytes)
.await?;
reduce_storage
{
message: String,
transaction_signature: String,
error: Option<String>,
}
refresh_stake
This method is used to update your storage account's stake amount. It is required to call this method after calling the `top_up` method in order for your storage account to update properly.
storage_account_key
: PublicKey
- Publickey of the Storage Account
refresh_stake
let refresh_stake = shdw_drive_client
.refresh_stake(&storage_account_key)
.await?;
refresh_stake
{
txid: string;
}
store_files
Stores files in the specified StorageAccount.
storage_account_key
- The public key of the StorageAccount.
data
- Vector of ShadowFile objects representing the files that will be stored.
store_files
let files: Vec<ShadowFile> = vec![
ShadowFile::file("alpha.txt".to_string(), "./examples/files/alpha.txt"),
ShadowFile::file(
"not_alpha.txt".to_string(),
"./examples/files/not_alpha.txt",
),
];
let store_files_response = shdw_drive_client
.store_files(&storage_account_key, files)
.await?;
store_files
{
finalized_locations: Array<string>;
message: string;
upload_errors: Array<UploadError>;
}
top_up
This method is used to top up a storage account's $SHDW balance to cover any necessary fees, like mutable storage fees which are collected every epoch. It is necessary to call the `refresh_stake` method after this.
key
: PublicKey
- Publickey of the Storage Account
amount
: u64
- Amount of $SHDW to transfer to the stake account
top_up
let top_up_amount: u64 = 1000;
let top_up = shdw_drive_client
.top_up(&storage_account_key, top_up_amount)
.await?;
let refresh_stake = shdw_drive_client
.refresh_stake(&storage_account_key)
.await?;
top_up
{
txid: string;
}
use byte_unit::Byte;
use shadow_drive_rust::{
models::storage_acct::StorageAcct, ShadowDriveClient, StorageAccountVersion,
};
use solana_sdk::{
pubkey::Pubkey,
signer::{keypair::read_keypair_file, Signer},
};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "/Users/dboures/.config/solana/id.json";
// Main function demonstrating the usage of ShdwDrive Rust client
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// // 1.
// create_storage_accounts(shdw_drive_client).await;
// // 2.
// let v1_pubkey = Pubkey::from_str("J4RJYandDDKxyF6V1HAdShDSbMXk78izZ2yEksqyvGmo").unwrap();
let v2_pubkey = Pubkey::from_str("9dXUV4BEKWohSRDn4cy5G7JkhUDWoSUGGwJngrSg453r").unwrap();
// make_storage_immutable(&shdw_drive_client, &v1_pubkey).await;
// make_storage_immutable(&shdw_drive_client, &v2_pubkey).await;
// // 3.
// add_immutable_storage_test(&shdw_drive_client, &v1_pubkey).await;
add_immutable_storage_test(&shdw_drive_client, &v2_pubkey).await;
}
// Function to create storage accounts with specified version and size
async fn create_storage_accounts<T: Signer>(shdw_drive_client: ShadowDriveClient<T>) {
let result_v1 = shdw_drive_client
.create_storage_account(
"shdw-drive-1.5-test-v1",
Byte::from_str("1MB").expect("invalid byte string"),
StorageAccountVersion::v1(),
)
.await
.expect("error creating storage account");
// Create a storage account with version 2
let result_v2 = shdw_drive_client
.create_storage_account(
"shdw-drive-1.5-test-v2",
Byte::from_str("1MB").expect("invalid byte string"),
StorageAccountVersion::v2(),
)
.await
.expect("error creating storage account");
println!("v1: {:?}", result_v1);
println!("v2: {:?}", result_v2);
}
// Function to make a storage account immutable
async fn make_storage_immutable<T: Signer>(
shdw_drive_client: &ShadowDriveClient<T>,
storage_account_key: &Pubkey,
) {
let storage_account = shdw_drive_client
.get_storage_account(storage_account_key)
.await
.expect("failed to get storage account");
match storage_account {
StorageAcct::V1(storage_account) => println!("account: {:?}", storage_account),
StorageAcct::V2(storage_account) => println!("account: {:?}", storage_account),
}
// Make the storage account immutable
let make_immutable_response = shdw_drive_client
.make_storage_immutable(&storage_account_key)
.await
.expect("failed to make storage immutable");
println!("response: {:?}", make_immutable_response);
let storage_account = shdw_drive_client
.get_storage_account(&storage_account_key)
.await
.expect("failed to get storage account");
match storage_account {
StorageAcct::V1(storage_account) => println!("account: {:?}", storage_account),
StorageAcct::V2(storage_account) => println!("account: {:?}", storage_account),
}
}
// Function to add immutable storage to a storage account
async fn add_immutable_storage_test<T: Signer>(
shdw_drive_client: &ShadowDriveClient<T>,
storage_account_key: &Pubkey,
) {
let storage_account = shdw_drive_client
.get_storage_account(&storage_account_key)
.await
.expect("failed to get storage account");
match storage_account {
StorageAcct::V1(storage_account) => {
println!("old size: {:?}", storage_account.reserved_bytes)
}
StorageAcct::V2(storage_account) => {
println!("old size: {:?}", storage_account.reserved_bytes)
}
}
// Add immutable storage to the account
let add_immutable_storage_response = shdw_drive_client
.add_immutable_storage(
storage_account_key,
Byte::from_str("1MB").expect("invalid byte string"),
)
.await
.expect("error adding storage");
println!("response: {:?}", add_immutable_storage_response);
let storage_account = shdw_drive_client
.get_storage_account(&storage_account_key)
.await
.expect("failed to get storage account");
match storage_account {
StorageAcct::V1(storage_account) => {
println!("new size: {:?}", storage_account.reserved_bytes)
}
StorageAcct::V2(storage_account) => {
println!("new size: {:?}", storage_account.reserved_bytes)
}
}
}
// Import necessary libraries and modules
use shadow_drive_rust::ShadowDriveClient;
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
// Define the path to the keypair file
const KEYPAIR_PATH: &str = "keypair.json";
// Main function with async support
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
let storage_account_key =
Pubkey::from_str("GHSNTDyMmay7xDjBNd9dqoHTGD3neioLk5VJg2q3fJqr").unwrap();
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// Send a request to cancel the deletion of the storage account
let response = shdw_drive_client
.cancel_delete_storage_account(&storage_account_key)
.await
.expect("failed to cancel storage account deletion");
println!("Unmark delete storage account complete {:?}", response);
}
use shadow_drive_rust::ShadowDriveClient;
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "keypair.json";
/// This example doesn't quite work.
/// claim_stake is used to redeem SHDW after you reduce the storage amount of an account
/// In order to successfully claim_stake, the user needs to wait an epoch after reducing storage
/// Trying to claim_stake in the same epoch as a reduction will result in
/// "custom program error: 0x1775"
/// "Error Code: ClaimingStakeTooSoon"
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
let storage_account_key =
Pubkey::from_str("GHSNTDyMmay7xDjBNd9dqoHTGD3neioLk5VJg2q3fJqr").unwrap();
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
let url = String::from(
"https://shdw-drive.genesysgo.net/B7Qk2omAvchkePhzHubCVQuVpZHcieqPQCwFxeeBZGuT/hey.txt",
);
// reduce storage
// let reduce_storage_response = shdw_drive_client
// .reduce_storage(
// storage_account_key,
// Byte::from_str("100KB").expect("invalid byte string"),
// )
// .await
// .expect("error adding storage");
// println!("txn id: {:?}", reduce_storage_response.txid);
// WAIT AN EPOCH
// claim stake
// let claim_stake_response = shdw_drive_client
// .claim_stake(storage_account_key)
// .await
// .expect("failed to claim stake");
// println!(
// "Claim stake complete {:?}",
// claim_stake_response
// );
}
// Import necessary modules and types
use shadow_drive_rust::{models::ShadowFile, ShadowDriveClient};
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "keypair.json";
// Main function
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
let v1_pubkey = Pubkey::from_str("GSvvRguVTtSayF5zLQPLVTJQHQ6Fqu1Z3HSRMP8AHY9K").unwrap();
let v2_pubkey = Pubkey::from_str("2cvgcqfmMg9ioFtNf57ZqCNbuWDfB8ZSzromLS8Kkb7q").unwrap();
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// Upload file for v1_pubkey
let v1_upload_reponse = shdw_drive_client
.store_files(
&v1_pubkey,
vec![ShadowFile::file(
String::from("example.png"),
"multiple_uploads/0.txt",
)],
)
.await
.expect("failed to upload v1 file");
println!("Upload complete {:?}", v1_upload_reponse);
// Upload file for v2_pubkey
let v2_upload_reponse = shdw_drive_client
.store_files(
&v2_pubkey,
vec![ShadowFile::file(
String::from("example.png"),
"multiple_uploads/0.txt",
)],
)
.await
.expect("failed to upload v2 file");
println!("Upload complete {:?}", v2_upload_reponse);
let v1_url = String::from(
"https://shdw-drive.genesysgo.net/GSvvRguVTtSayF5zLQPLVTJQHQ6Fqu1Z3HSRMP8AHY9K/example.png",
);
let v2_url = String::from(
"https://shdw-drive.genesysgo.net/2cvgcqfmMg9ioFtNf57ZqCNbuWDfB8ZSzromLS8Kkb7q/example.png",
);
//delete file
// Delete file for v1_pubkey
let v1_delete_file_response = shdw_drive_client
.delete_file(&v1_pubkey, v1_url)
.await
.expect("failed to delete file");
println!("Delete file complete {:?}", v1_delete_file_response);
// Delete file for v2_pubkey
let v2_delete_file_response = shdw_drive_client
.delete_file(&v2_pubkey, v2_url)
.await
.expect("failed to delete file");
println!("Delete file complete {:?}", v2_delete_file_response);
}
use shadow_drive_rust::ShadowDriveClient;
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "keypair.json";
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
let storage_account_key =
Pubkey::from_str("9VndNFwL7cVTshY2x5GAjKQusRCAsDU4zynYg76xTguo").unwrap();
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// Request storage account deletion
let response = shdw_drive_client
.delete_storage_account(&storage_account_key)
.await
.expect("failed to request storage account deletion");
println!("Delete Storage Account complete {:?}", response);
}
use byte_unit::Byte;
use shadow_drive_rust::{models::ShadowFile, ShadowDriveClient, StorageAccountVersion};
use solana_sdk::{
pubkey,
pubkey::Pubkey,
signer::{keypair::read_keypair_file, Signer},
};
const KEYPAIR_PATH: &str = "keypair.json";
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
// let pubkey = keypair.pubkey();
// let (storage_account_key, _) =
// shadow_drive_rust::derived_addresses::storage_account(&pubkey, 0);
let storage_account_key = pubkey!("G6nE9EbNgSDcvUvs67enP2Jba3exgLyStgsg8S7n9StS");
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// get_storage_accounts_test(shdw_drive_client, &pubkey).await
// create_storage_account_v2_test(shdw_drive_client).await
upload_file_test(shdw_drive_client, &storage_account_key).await
}
async fn get_storage_accounts_test<T: Signer>(
shdw_drive_client: ShadowDriveClient<T>,
pubkey: &Pubkey,
) {
let storage_accounts = shdw_drive_client
.get_storage_accounts(pubkey)
.await
.expect("failed to get storage account");
println!("{:?}", storage_accounts);
}
async fn create_storage_account_v2_test<T: Signer>(shdw_drive_client: ShadowDriveClient<T>) {
let result = shdw_drive_client
.create_storage_account(
"shdw-drive-1.5-test",
Byte::from_str("10MB").expect("invalid byte string"),
StorageAccountVersion::v2(),
)
.await
.expect("error creating storage account");
println!("{:?}", result);
}
// async fn get_object_data_test<T: Signer>(
// shdw_drive_client: ShadowDriveClient<T>,
// location: &str,
// ) {
// let result = shdw_drive_client
// .get_object_data(location)
// .await
// .expect("error getting object data");
// println!("{:?}", result);
// }
// async fn list_objects_test<T: Signer>(
// shdw_drive_client: ShadowDriveClient<T>,
// storage_account_key: &Pubkey,
// ) {
// let objects = shdw_drive_client
// .list_objects(storage_account_key)
// .await
// .expect("failed to list objects");
// println!("objects {:?}", objects);
// }
// async fn make_storage_immutable_test<T: Signer>(
// shdw_drive_client: ShadowDriveClient<T>,
// storage_account_key: &Pubkey,
// ) {
// let storage_account = shdw_drive_client
// .get_storage_account(storage_account_key)
// .await
// .expect("failed to get storage account");
// println!(
// "identifier: {:?}; immutable: {:?}",
// storage_account.identifier, storage_account.immutable
// );
// let make_immutable_response = shdw_drive_client
// .make_storage_immutable(&storage_account_key)
// .await
// .expect("failed to make storage immutable");
// println!("txn id: {:?}", make_immutable_response.txid);
// let storage_account = shdw_drive_client
// .get_storage_account(&storage_account_key)
// .await
// .expect("failed to get storage account");
// println!(
// "identifier: {:?}; immutable: {:?}",
// storage_account.identifier, storage_account.immutable
// );
// }
// async fn add_storage_test<T: Signer>(
// shdw_drive_client: &ShadowDriveClient<T>,
// storage_account_key: &Pubkey,
// ) {
// let storage_account = shdw_drive_client
// .get_storage_account(&storage_account_key)
// .await
// .expect("failed to get storage account");
// let add_storage_response = shdw_drive_client
// .add_storage(
// storage_account_key,
// Byte::from_str("10MB").expect("invalid byte string"),
// )
// .await
// .expect("error adding storage");
// println!("txn id: {:?}", add_storage_response.txid);
// let storage_account = shdw_drive_client
// .get_storage_account(&storage_account_key)
// .await
// .expect("failed to get storage account");
// println!("new size: {:?}", storage_account.storage);
// }
// async fn reduce_storage_test<T: Signer>(
// shdw_drive_client: ShadowDriveClient<T>,
// storage_account_key: &Pubkey,
// ) {
// let storage_account = shdw_drive_client
// .get_storage_account(storage_account_key)
// .await
// .expect("failed to get storage account");
// println!("previous size: {:?}", storage_account.storage);
// let add_storage_response = shdw_drive_client
// .reduce_storage(
// storage_account_key,
// Byte::from_str("10MB").expect("invalid byte string"),
// )
// .await
// .expect("error adding storage");
// println!("txn id: {:?}", add_storage_response.txid);
// let storage_account = shdw_drive_client
// .get_storage_account(storage_account_key)
// .await
// .expect("failed to get storage account");
// println!("new size: {:?}", storage_account.storage);
// }
async fn upload_file_test<T: Signer>(
shdw_drive_client: ShadowDriveClient<T>,
storage_account_key: &Pubkey,
) {
let upload_reponse = shdw_drive_client
.store_files(
storage_account_key,
vec![ShadowFile::file(String::from("example.png"), "example.png")],
)
.await
.expect("failed to upload file");
println!("Upload complete {:?}", upload_reponse);
}
use byte_unit::Byte;
use shadow_drive_rust::{ShadowDriveClient, StorageAccountVersion};
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "keypair.json";
// Main function to demonstrate creating and migrating a storage account
// using ShadowDriveClient
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
// create V1 storage account
let v1_response = shdw_drive_client
.create_storage_account(
"1.5-test",
Byte::from_str("1MB").expect("invalid byte string"),
StorageAccountVersion::v1(),
)
.await
.expect("error creating storage account");
println!("v1: {:?} \n", v1_response);
let key_string: String = v1_response.shdw_bucket.unwrap();
let v1_pubkey: Pubkey = Pubkey::from_str(&key_string).unwrap();
// can migrate all at once
let migrate = shdw_drive_client
.migrate(&v1_pubkey)
.await
.expect("failed to migrate");
println!("Migrated {:?} \n", migrate);
// alternatively can split migration into 2 steps (both steps are exposed)
// // step 1
// let migrate_step_1 = shdw_drive_client
// .migrate_step_1(&v1_pubkey)
// .await
// .expect("failed to migrate v1 step 1");
// println!("Step 1 complete {:?} \n", migrate_step_1);
// // step 2
// let migrate_step_2 = shdw_drive_client
// .migrate_step_2(&v1_pubkey)
// .await
// .expect("failed to migrate v1 step 2");
// println!("Step 2 complete {:?} \n", migrate_step_2);
}
use shadow_drive_rust::ShadowDriveClient;
use solana_sdk::{pubkey::Pubkey, signer::keypair::read_keypair_file};
use std::str::FromStr;
const KEYPAIR_PATH: &str = "keypair.json";
#[tokio::main]
async fn main() {
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
//create shdw drive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
let storage_account_key =
Pubkey::from_str("D7Qk2omAvchkePhzHubCVQuVpZHcieqPQCwFxeeBZGuT").unwrap();
let file_account_key =
Pubkey::from_str("B41kFXqFkDhY7kHbMhEk17bP2w7QLUYU9X5tRhDLttnJ").unwrap();
let redeem_rent_response = shdw_drive_client
.redeem_rent(&storage_account_key, &file_account_key)
.await
.expect("failed to redeem_storage");
println!("Redeemed {:?} \n", redeem_rent_response);
}
use byte_unit::Byte;
use futures::TryStreamExt;
use shadow_drive_rust::{models::ShadowFile, ShadowDriveClient, StorageAccountVersion};
use solana_sdk::signer::{keypair::read_keypair_file, Signer};
use tokio_stream::StreamExt;
const KEYPAIR_PATH: &str = "keypair.json";
// Main function for uploading multiple files to a ShdwDrive storage account
#[tokio::main]
async fn main() {
tracing_subscriber::fmt()
.with_env_filter("off,shadow_drive_rust=debug")
.init();
//load keypair from file
let keypair = read_keypair_file(KEYPAIR_PATH).expect("failed to load keypair at path");
let pubkey = keypair.pubkey();
let (storage_account_key, _) =
shadow_drive_rust::derived_addresses::storage_account(&pubkey, 21);
//create ShdwDrive client
let shdw_drive_client = ShadowDriveClient::new(keypair, "https://ssc-dao.genesysgo.net");
//ensure storage account exists
if let Err(_) = shdw_drive_client
.get_storage_account(&storage_account_key)
.await
{
println!("Error finding storage account, assuming it's not created yet");
shdw_drive_client
.create_storage_account(
"shadow-drive-rust-test-2",
Byte::from_str("1MB").expect("failed to parse byte string"),
StorageAccountVersion::v2(),
)
.await
.expect("failed to create storage account");
}
// Read files from "multiple_uploads" directory
let dir = tokio::fs::read_dir("multiple_uploads")
.await
.expect("failed to read multiple uploads dir");
// Create ShadowFile objects for each file in the directory
let mut files = tokio_stream::wrappers::ReadDirStream::new(dir)
.filter(Result::is_ok)
.and_then(|entry| async move {
Ok(ShadowFile::file(
entry
.file_name()
.into_string()
.expect("failed to convert os string to regular string"),
entry.path(),
))
})
.collect::<Result<Vec<_>, _>>()
.await
.expect("failed to create shdw files for dir");
// Add a ShadowFile object with byte content
files.push(ShadowFile::bytes(
String::from("buf.txt"),
&b"this is a buf test"[..],
));
// Upload files to the storage account
let upload_results = shdw_drive_client
.store_files(&storage_account_key, files)
.await
.expect("failed to upload files");
println!("upload results: {:#?}", upload_results);
}