As part of the Microsoft Build conference (May 2020), and since early this year, several
key announcements were made related to Azure Blob Storage. In this blog post I
will try to cover a few of the key announcements, some of which are now generally
available and some still in preview.
-
User Delegation SAS Tokens (Generally Available)
This feature became generally available in January 2020. Before the user
delegation SAS token feature, access to a private or secured container was
managed through Managed Identity or shared access signature (SAS)
tokens. SAS tokens grant specific, time-limited access to storage objects without
exposing an account access key.
A SAS secured with Azure AD credentials is called a user delegation SAS.
User delegation SAS tokens support Azure AD and RBAC, which means
lower-privileged users and services can now delegate subsets of their
access to clients using this new type of pre-authorized URL. Clients
retrieve a user delegation key tied to their Azure Active Directory
(AD) account and then use it to create SAS tokens granting a subset
of their own access rights. This feature is generally available, which means it can be used for
production workloads, and it is available in all Azure regions.
Note:
This feature requires Azure CLI version 2.0.78 or later.
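As a sketch, a user delegation SAS for a container can be created with the Azure CLI by signing in with an Azure AD identity and passing `--as-user` (the account and container names below are placeholders; this requires an Azure subscription):

```shell
# Sign in with an Azure AD identity that has RBAC access to the storage account
az login

# Expiry one hour from now (UTC), e.g. 2020-06-01T12:00Z
EXPIRY=$(date -u -d "+1 hour" '+%Y-%m-%dT%H:%MZ')

# --auth-mode login with --as-user requests a user delegation key from
# Azure AD and signs the SAS with it instead of the account access key
az storage container generate-sas \
    --account-name mystorageaccount \
    --name mycontainer \
    --permissions r \
    --expiry "$EXPIRY" \
    --auth-mode login \
    --as-user
```

The returned SAS token can then be appended to a blob URL and handed to a client; it is valid only for the subset of rights the signed-in identity itself holds.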
-
Priority retrieval from Azure Archive (Generally Available)
Priority retrieval allows you to flag the rehydration of your data from the
offline archive tier back to an online hot or cool tier as a high-priority
action.
The two archive retrieval options are:
-
Standard priority, the default, with retrievals taking up to 15 hours.
-
High priority, for when you need urgent data access from archive, with
retrievals for blobs under 10 GB typically taking less than 1 hour.
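With the Azure CLI, the retrieval priority is set when moving an archived blob back to an online tier (a sketch with placeholder names; requires an Azure subscription):

```shell
# Rehydrate an archived blob back to the hot tier as a high-priority action;
# Standard is the default when --rehydrate-priority is omitted
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name archived-data.csv \
    --tier Hot \
    --rehydrate-priority High
```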
-
Upload blob direct to access tier of your choice (Generally Available)
This feature enables customers to write cold data directly to the Azure
archive tier using the Put Blob and Put Block List APIs.
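From the Azure CLI this maps to the `--tier` option on upload, so cold data never has to pass through hot or cool first (placeholder names; a sketch requiring an Azure subscription):

```shell
# Upload straight into the archive tier, skipping the hot/cool tiers
az storage blob upload \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name backup-2020-05.tar.gz \
    --file ./backup-2020-05.tar.gz \
    --tier Archive
```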
-
CopyBlob enhanced capabilities (Generally Available)
The Copy Blob API now supports the archive access tier.
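As a sketch (assuming a recent CLI version; names and URL are placeholders), a copy can land its destination blob directly in the archive tier:

```shell
# Copy a blob and place the destination directly in the archive tier
az storage blob copy start \
    --account-name mystorageaccount \
    --destination-container backups \
    --destination-blob report.pdf \
    --source-uri "https://mystorageaccount.blob.core.windows.net/docs/report.pdf" \
    --tier Archive
```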
-
Geo-Zone-Redundant Storage (Generally Available)
Geo-Zone-Redundant Storage (GZRS) and Read-Access Geo-Zone-Redundant
Storage (RA-GZRS) are now generally available, offering intra-regional and
inter-regional high availability and disaster protection for your
applications. GZRS writes three copies of your data synchronously across multiple
Azure availability zones in the primary region, and asynchronously
replicates your data to a secondary region.
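Replication is chosen via the account SKU; a minimal sketch with placeholder names (requires an Azure subscription):

```shell
# Create a storage account with geo-zone-redundant replication;
# use Standard_RAGZRS instead for read access to the secondary region
az storage account create \
    --name mystorageaccount \
    --resource-group my-rg \
    --location eastus2 \
    --sku Standard_GZRS \
    --kind StorageV2
```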
-
Account failover (Generally Available)
Customer-initiated storage account failover is now generally available,
allowing you to determine when to initiate a failover instead of waiting
for Microsoft to do so. When you perform a failover, the secondary
replica of the storage account becomes the new primary. Once the failover
is complete, you will automatically begin reading from and writing to data
in the new primary region, with no code changes.
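A sketch of a customer-initiated failover with the CLI (placeholder names; requires a geo-redundant storage account):

```shell
# Check the replication status of the secondary before failing over
az storage account show \
    --name mystorageaccount \
    --resource-group my-rg \
    --expand geoReplicationStats

# Initiate the failover; the secondary region becomes the new primary
az storage account failover \
    --name mystorageaccount \
    --resource-group my-rg
```

Note that failing over before the secondary has caught up can mean losing writes that had not yet replicated, so checking `geoReplicationStats` first is worthwhile.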
- Blob versioning (In Preview)
As applications create, update, and delete data, we now have the ability to
access and manage both the current and previous versions of the
data. You can restore a prior version of a blob to recover your data if it is
erroneously modified or deleted.
A version captures a committed blob state at a given point in time.
When versioning is enabled for a storage account, Azure Storage
automatically creates a new version of a blob each time that blob is
modified or deleted.
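A sketch of enabling versioning and listing versions with the CLI (placeholder names; the `--include v` flag assumes a CLI version with blob-versioning support):

```shell
# Turn on automatic blob versioning for the whole storage account
az storage account blob-service-properties update \
    --account-name mystorageaccount \
    --resource-group my-rg \
    --enable-versioning true

# List blobs together with their prior versions ('v' = versions)
az storage blob list \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --include v
```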
- Point in time restore (In Preview)
Just like the point-in-time restore functionality we have for Azure SQL,
this is one of the much-awaited features for Blob storage. Point-in-time restore for Azure Blob Storage gives storage account
administrators the ability to restore a subset of containers or blobs
within a storage account to a previous state. As prerequisites for
point-in-time restore, you need to enable the following features: soft delete,
change feed, and blob versioning. Since the service is in preview, it is
supported only in the following regions at this stage: Canada Central,
Canada East, and France Central.
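The prerequisites and the restore policy itself can all be enabled in one CLI call; a sketch with placeholder names and retention periods (the restore window must be shorter than the soft-delete retention):

```shell
# Enable soft delete, change feed, and versioning (the prerequisites),
# then turn on the point-in-time restore policy with a 7-day window
az storage account blob-service-properties update \
    --account-name mystorageaccount \
    --resource-group my-rg \
    --enable-delete-retention true \
    --delete-retention-days 14 \
    --enable-change-feed true \
    --enable-versioning true \
    --enable-restore-policy true \
    --restore-days 7
```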
- Blob Index (In Preview)
As datasets get larger, finding specific related objects can be
difficult. Generally what we do is use the ListBlobs API to retrieve records
and parse through the list. To populate the blob index, define
key-value tag attributes on your data, either on new data during upload or
on existing data in your storage account. These blob index tags are stored
alongside your underlying blob data.
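A sketch of tagging and querying with the CLI, assuming the storage-blob-preview extension (blob index was in preview at the time; names and tags below are placeholders):

```shell
# Blob index tag commands ship in the storage-blob-preview extension
az extension add --name storage-blob-preview

# Attach key-value index tags to a blob at upload time
az storage blob upload \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name sales-2020.csv \
    --file ./sales-2020.csv \
    --tags project=contoso status=processed

# Find matching blobs across containers without listing and parsing
az storage blob filter \
    --account-name mystorageaccount \
    --tag-filter "\"project\"='contoso' AND \"status\"='processed'"
```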