Amazon S3 now supports the ability to append data to an object
Amazon S3 Express One Zone now lets users append data to existing objects, benefiting applications such as log processing and media broadcasting; the feature is accessible via the AWS SDK, the AWS CLI, or Mountpoint for Amazon S3.
Amazon S3 Express One Zone has introduced a new feature that allows users to append data to existing objects. This enhancement is particularly beneficial for applications that require continuous data input, such as log-processing and media-broadcasting applications. Previously, these applications had to accumulate data locally before uploading the final object to S3. With the new capability, users can append data directly to existing objects and read it immediately within S3 Express One Zone. The feature is available in all AWS Regions where the storage class is offered, and users can get started with the AWS SDK, the AWS CLI, or Mountpoint for Amazon S3 (version 1.12.0 or higher). For further details, users are directed to the S3 User Guide.
- Amazon S3 Express One Zone now supports appending data to existing objects.
- This feature is useful for applications that continuously receive data, like log-processing and media-broadcasting.
- Users can append data directly without needing to combine it in local storage first.
- The feature is available in all AWS Regions where S3 Express One Zone is offered.
- Users can access this functionality through the AWS SDK, AWS CLI, or Mountpoint for Amazon S3.
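As a rough illustration, an append in S3 Express One Zone boils down to a PutObject call that targets the current end of the object. Here is a minimal boto3 sketch, assuming the SDK exposes the write offset as a `WriteOffsetBytes` parameter on `put_object`; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-logs--usw2-az1--x-s3"  # hypothetical directory bucket name
key = "app/events.log"

# Create the object with an initial chunk of data.
s3.put_object(Bucket=bucket, Key=key, Body=b"first line\n")

# Appends must target the current end of the object, so look up its size,
size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

# then write the next chunk at that offset.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"second line\n",
    WriteOffsetBytes=size,  # assumed boto3 name for x-amz-write-offset-bytes
)
```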
Related
Amazon S3 now supports conditional writes
Amazon S3 has introduced conditional writes to prevent overwriting existing objects, enhancing reliability for concurrent updates in applications. This feature is free and accessible via AWS SDK, API, or CLI.
Behind AWS S3's Scale
AWS S3, launched in 2006, supports 100 million requests per second, stores 280 trillion objects, utilizes over 300 microservices, and offers strong durability and data redundancy features for cloud storage.
Using S3 presigned upload URLs with Hetzner's new object storage
Hetzner has launched S3-compatible object storage, requiring users to configure a CORS policy for presigned URL uploads. Launchway offers a SaaS starter kit with S3-compatible storage features.
SST: Container Support
SST has added native support for containerized applications on AWS, introducing new components for deployment, enhancing the CLI for local development, and planning future support for more programming languages.
Amazon S3 now supports up to 1M buckets per AWS account
Amazon S3 has increased the default bucket limit to 10,000 per account, allowing requests for up to 1 million buckets, with the first 2,000 created at no cost.
- Many users appreciate the potential for real-time data processing applications, such as log processing and media workflows.
- Concerns are raised about the limitations of the feature, including the requirement to specify a write offset and the 10,000 parts limit.
- Some users express disappointment over the higher costs and lower availability of the Express One Zone compared to standard S3.
- Comparisons are made with other cloud storage solutions, highlighting existing features in competitors like Google Cloud Storage and Azure.
- There are discussions about the implications of Amazon's changes on the broader ecosystem and compatibility with third-party services.
Key points:
- It's just for the "S3 Express One Zone" bucket class, which is more expensive (16¢/GB/month compared to 2.3¢/GB/month for the S3 Standard tier) and less highly available, since it lives in just one availability zone
- "With each successful append operation, you create a part of the object and each object can have up to 10,000 parts. This means you can append data to an object up to 10,000 times."
That 10,000-part limit means this isn't quite the solution for writing log files directly to S3: a logger flushing one append per second would exhaust a single object in under three hours.
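If you did want to stream logs this way, the obvious workaround is rotating to a fresh object before the cap is hit. A hypothetical sketch (the rotation scheme, names, and the `WriteOffsetBytes` parameter are assumptions, not from the announcement):

```python
import boto3

MAX_APPENDS = 10_000  # per-object part limit quoted above
s3 = boto3.client("s3")
bucket = "my-logs--usw2-az1--x-s3"  # hypothetical directory bucket


class RotatingLogWriter:
    """Append log chunks, rolling to a new object key near the part cap."""

    def __init__(self, prefix: str = "app/events"):
        self.prefix = prefix
        self.index = 0      # which object we are currently writing
        self.appends = 0    # parts written to the current object
        self.offset = 0     # current size of the object in bytes

    @property
    def key(self) -> str:
        return f"{self.prefix}-{self.index:06d}.log"

    def write(self, chunk: bytes) -> None:
        if self.appends >= MAX_APPENDS:
            self.index += 1  # rotate to a fresh object
            self.appends = 0
            self.offset = 0
        if self.appends == 0:
            # The first write creates the object.
            s3.put_object(Bucket=bucket, Key=self.key, Body=chunk)
        else:
            # Later writes append at the current end of the object.
            s3.put_object(Bucket=bucket, Key=self.key, Body=chunk,
                          WriteOffsetBytes=self.offset)
        self.appends += 1
        self.offset += len(chunk)
```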
Google Cloud Storage already supports an append-like pattern via composite objects: https://cloud.google.com/storage/docs/composite-objects#appe...
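The GCS approach is different in shape: instead of writing at an offset, you upload the new data as its own object and compose it onto the target. A rough sketch with the google-cloud-storage library (bucket and object names are placeholders, and the target is assumed to exist already):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-gcs-logs")  # placeholder bucket name

target = bucket.blob("app/events.log")      # assumed to exist already
chunk = bucket.blob("app/events.log.part")  # temporary object for new data

# Upload the new data as its own object,
chunk.upload_from_string("another line\n")

# then rewrite the target as target + chunk. Each compose call accepts
# at most 32 sources, but a source may itself be a composite object.
target.compose([target, chunk])
chunk.delete()
```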
Does anybody know if appending is still subject to the 5 TB object size limit?
I have been using Azure Storage append blobs to store logs of long-running tasks with periodic flushes (see https://learn.microsoft.com/en-us/rest/api/storageservices/u...)
S3 is often used as a lowest common denominator, and a lot of the features of Azure and GCS aren't leveraged by libraries and formats that try to be cross-platform, since they only want to expose features that are available everywhere.
If all object stores support append these days, then perhaps the data storage formats and libraries can start leveraging it?
Edit: oh it’s only in one AZ
Most of them are cheaper, some MUCH cheaper.
S3 has stagnated for a long time, allowing it to become a standard.
Third parties have cloned the storage service, and a vast array of software is compatible: there are drivers, file-transfer programs, and utilities.
What does it mean that Amazon is now changing it?
Does Amazon even really own the standard anymore, and does it have the right to break long-standing conventions?
I'm reminded of IBM breaking compatibility with the PS/2 computers just so it could maintain dominance.