Storage Resources
Storage resources are available in InsightCloudSec as the third section (tab) under the Resource landing page. These resources are related to storage functionality and include resources like volumes, snapshots, and storage containers.
Storage resources are displayed alphabetically using the InsightCloudSec normalized terminology. Hovering over an individual resource provides the CSP-specific term with the associated logo to help users confirm the displayed information. For example, a Storage Container refers to Amazon's "S3", Azure's "Blob Storage Container", Google's "Cloud Storage", etc.
For a detailed reference of this normalized terminology, check out our Resource Terminology.
Some attributes may not be included in these lists
A large number of Resource Attributes are offered for the resources outlined here. Because we are continuously expanding our supported resources, the attributes and details included here cannot be guaranteed to cover every resource or every attribute.
If you need information about the attributes of a particular resource we are happy to help get those details for you - reach out to us through the Customer Support Portal with any questions!
Backup Vault
Backup vaults are containers for organizing your backups.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the backup vault resides in |
create_time | The time when the Backup Vault was created |
name | The name of the vault |
recovery_points | Number of recovery points |
policy | The IAM Policy of the Backup Vault in JSON format |
trusted_accounts | Any accounts this Backup Vault has a trust relationship with |
public | Boolean denoting if this Backup Vault is publicly accessible |
key_resource_id | The Resource ID of the Backup Vault's associated key |
arn | The ARN of the Backup Vault |
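The `public`, `policy`, and `trusted_accounts` attributes above typically derive from the vault's IAM policy document. As a rough sketch of how such a policy can be evaluated for public access (plain Python over a hypothetical policy document, not the InsightCloudSec API):

```python
import json

def vault_is_public(policy_json: str) -> bool:
    """Return True if any statement in the vault's IAM policy grants
    access to a wildcard principal (the usual definition of 'public')."""
    policy = json.loads(policy_json)
    for statement in policy.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        principal = statement.get("Principal", {})
        # Principal may be the literal "*" or {"AWS": "*"} / {"AWS": [...]}
        if principal == "*":
            return True
        aws = principal.get("AWS", []) if isinstance(principal, dict) else []
        if aws == "*" or "*" in (aws if isinstance(aws, list) else [aws]):
            return True
    return False

# Hypothetical policy document for illustration
sample = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Principal": {"AWS": "*"},
                   "Action": "backup:DescribeBackupVault", "Resource": "*"}],
})
print(vault_is_public(sample))  # True
```

A non-wildcard principal (for example, a specific account ARN) would instead populate `trusted_accounts` while leaving `public` false.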
Big Data Snapshot
Big Data Snapshots are point-in-time backups of a Big Data Instance. An example of this type of instance would be AWS Redshift. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the snapshot resides in |
snapshot_id | The provider ID of the snapshot |
name | The name of the snapshot |
instance_resource_id | The resource ID of the instance this snapshot was created from |
snapshot_type | The type of snapshot (manual vs automatic) |
state | The current lifecycle state of the snapshot |
encrypted | Denotes if the data stored on the snapshot is encrypted |
availability_zone | The zone where the snapshot lives |
create_time | The time when the snapshot creation was launched |
port | The port that the database instance listens on |
cluster_version | The version number for the cluster |
nodes | The number of nodes in this cluster |
instance_type | The type of instance this snapshot was taken on |
database_name | The name of the master database |
size | The size (in gigabytes) of the snapshot |
master_username | The master account associated with the instance |
class DivvyResource.Resources.bigdatasnapshot.BigDataSnapshot(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
BigData Snapshot Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
get_parent_resource_id()
static get_provider_id_field()
static get_resource_type()
get_state()
Retrieve the snapshot state.
get_supported_actions()
handle_resource_created(user_resource_id=None, project_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to groups, alerts, etc.).
handle_resource_destroyed(user_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from projects/groups, alerts, etc.).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
snapshot
top_level_resource = True
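The `delete()` behavior documented above (queue when inside a `with JobQueue()` block, otherwise run immediately) can be illustrated with a self-contained sketch; `JobQueue` and `Snapshot` here are simplified stand-ins, not the DivvyResource implementations:

```python
# Hypothetical stand-ins illustrating the queue-or-immediate delete pattern.
_active_queue = None

class JobQueue:
    """Context manager that collects jobs instead of running them."""
    def __init__(self):
        self.jobs = []
    def __enter__(self):
        global _active_queue
        _active_queue = self
        return self
    def __exit__(self, *exc):
        global _active_queue
        _active_queue = None

class Snapshot:
    def __init__(self, resource_id):
        self.resource_id = resource_id
        self.deleted = False
    def delete(self, user_resource_id=None):
        if _active_queue is not None:
            _active_queue.jobs.append(self.resource_id)  # queued for later
        else:
            self.deleted = True  # immediate deletion

snap = Snapshot("bigdatasnapshot:1:us-east-1:abc123")
with JobQueue() as q:
    snap.delete()
print(len(q.jobs), snap.deleted)  # 1 False
snap.delete()
print(snap.deleted)  # True
```

Queuing lets many deletions discovered in one pass be dispatched as a batch rather than blocking on each call.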
Cache Snapshot
Cache Snapshots are point-in-time backups of a memcache instance. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the snapshot resides in |
snapshot_id | The provider ID of the snapshot |
name | The name of the snapshot |
snapshot_type | The type of snapshot (manual vs automatic) |
instance_resource_id | The resource ID of the parent instance |
state | The current lifecycle state of the snapshot |
availability_zone | The zone where the snapshot lives |
create_time | The time when the snapshot creation was launched |
port | The port that the database instance listens on |
engine | The database engine that the instance was configured to use |
engine_version | The engine version |
size | The size in gigabytes of the volume |
progress | The progress of the snapshot creation |
class DivvyResource.Resources.memcachesnapshot.MemcacheSnapshot(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
Cache Instance Snapshot Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
get_parent_resource_id()
static get_provider_id_field()
static get_resource_type()
get_state()
Retrieve the snapshot state.
get_supported_actions()
handle_resource_created(user_resource_id=None, project_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to groups, alerts, etc).
handle_resource_destroyed(user_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from projects/groups, alerts, etc).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
snapshot
top_level_resource = True
Cassandra Table
Cassandra Tables are managed, efficient, and reliable Apache Cassandra-based database services; for example, AWS Keyspaces.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
namespace_id | The provider-specific namespace ID value |
keyspace_name | The name of the keyspace |
table_name | The name of the table |
region_name | The region in which the table resides |
creation_time | The timestamp for when the table was created |
throughput_mode | The throughput mode for the table |
read_units | The number of read units in the table |
write_units | The number of write units in the table |
key_resource_id | The resource ID for the encryption key associated with the table |
point_in_time_recovery | Denotes whether point-in-time recovery is enabled |
ttl | Time to live (in seconds) |
comment | Description of the table |
Cloud
Cloud Dataset
Datasets are top-level containers that are used to organize and control access to your tables and views (GCP BigQuery Datasets). This class inherits from TopLevelResource and has direct access to the resource's database object. The following attributes are directly accessible:
Attribute | Description |
---|---|
region_name | The region that the dataset resides in |
dataset_id | The provider ID of the dataset |
name | The name of the dataset |
description | The optional description for the dataset |
table_count | The number of tables within the dataset |
total_size_bytes | The size in bytes of the dataset |
table_expiration_ms | The expiration time in ms for the dataset tables |
creation_date | The time this resource was created |
last_modified_date | The time this resource was last modified |
publicly_accessible | Denotes whether the dataset is publicly accessible |
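`total_size_bytes` is reported as a raw byte count. A small helper for rendering it in binary units (an illustrative convenience, not part of the product API):

```python
def human_size(total_size_bytes):
    """Render a dataset's total_size_bytes in binary units."""
    size = float(total_size_bytes)
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if size < 1024 or unit == "TiB":
            return f"{size:.1f} {unit}"
        size /= 1024

print(human_size(5_368_709_120))  # 5.0 GiB
```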
Cloud Global Access Point
A global endpoint for routing storage container request traffic between regions.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
name | The name of the access point |
alias | Denotes the alias of the access point |
status | Status of the access point |
creation_date | The date the access point was created |
arn | The ARN associated with the access point |
bucket_count | Denotes the number of buckets associated with the access point |
policy | The policy associated with the access point |
public | Denotes if the access point allows public access |
trusted_accounts | The list of accounts that can interact with the access point |
public_access_block | The public access block of the access point |
Cold Storage
Cold Storage is used for long-term storage of infrequently accessed data, such as end-of-lifecycle, compliance, or regulatory backups. An example of this type of resource is AWS Glacier.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region where the vault exists |
name | The name of the cold storage container |
arn | The Amazon Resource Name of the cold storage vault (AWS Only) |
size_in_bytes | The size in bytes |
number_of_archives | The number of archives |
last_inventory_date | The date of last inventory |
creation_date | The date the vault was created |
lock_creation_date | The date of lock creation. |
lock_expiration_date | The date current lock policy expires |
lock_state | Denotes current lock state |
lock_policy | The lock policy document (json) |
policy | The linked policy (json) |
trusted_accounts | The trusted accounts that can interact with the resource |
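The lock attributes above describe a vault lock lifecycle. A minimal sketch of deciding whether a lock is still in force, assuming the dates arrive as ISO-8601 strings (the real attribute format may differ):

```python
from datetime import datetime, timezone

def lock_active(lock_creation_date, lock_expiration_date, now=None):
    """Return True if a vault lock exists and has not yet expired."""
    if not lock_creation_date:
        return False  # no lock was ever created
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromisoformat(lock_expiration_date)
    return now < expires

check = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(lock_active("2024-01-01T00:00:00+00:00",
                  "2025-01-01T00:00:00+00:00", now=check))  # True
```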
Data
Data Analytics Workspace
Data Analytics Workspace is a storage and interactive query service that makes it easy to analyze data. An example of this type of resource is AWS Athena.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that this resource resides in |
workspace_id | The provider-specific workspace ID |
create_time | The date the workspace was created |
name | The name of the data analytics workspace |
description | The optional description associated with the data analytics workspace |
state | The state the workspace is in |
encrypted | Denotes whether or not the workspace is encrypted |
key_resource_id | The resource id of the encryption key associated with the workspace |
requester_pays | Denotes whether usage costs pass through to the requester |
metrics_enabled | Denotes whether CloudWatch metrics are enabled |
output_location | The output location of the results (optional) |
Data Factory
Data Factory is a fully managed, serverless data integration service. It includes visual integration for data sources with built-in, maintenance-free connectors, allowing for easy construction of ETL and ELT processes code-free; it also allows you to write your own code. An example of this type of resource is Azure Data Factory.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the resource resides |
factory_id | The provider ID of the data factory |
name | The name of the data factory |
state | The state of the data factory (e.g. 'succeeded') |
create_time | The time the data factory was created |
encryption_type | Denotes the encryption type (e.g. 'default') |
key_resource_id | The InsightCloudSec resource ID of the encryption key used to encrypt the data factory |
public_network_access | Denotes whether the data factory is accessible to the public |
Data Lake Storage
Azure Data Lake Storage Gen1 Retired
As of February 29, 2024, Azure has retired the Data Lake Storage Gen1 service. The Data Lake Storage resource type has been disabled until InsightCloudSec is able to officially support Azure Data Lake Storage Gen2. Contact support for any questions or issues.
Data Lake Storage is a cloud analytics service where you can easily develop and run massively parallel data transformation and processing programs in U-SQL, R, Python, and .NET over petabytes of data. With no infrastructure to manage, you can process data on demand and scale instantly.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the resource resides in |
storage_id | The provider ID of the data lake storage |
name | The name of the data lake storage |
state | The state of the data lake |
public_access | Denotes if the data lake is accessible to the public |
encrypted | Denotes if the data lake is encrypted at rest |
Data Stream
Data Stream is the transfer of data at a steady high-speed rate (AWS Kinesis). This class inherits from TopLevelResource and has direct access to the resource's database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
name | The name of the data stream |
region_name | The region in which the resource resides |
arn | The Amazon Resource Name of the data stream |
status | The status of the data stream |
shards | The number of shards in this data stream |
metrics | The JSON string for the metrics of the data stream |
encryption | Denotes whether the data stream has server-side encryption enabled |
key_resource_id | The InsightCloudSec resource ID of the encryption key used to encrypt the data stream |
retention_period | The length of time in seconds that data stream records are retained |
created_timestamp | The date the data stream was created |
tier | Denotes the pricing tier |
public_access | Denotes if the data stream is accessible to the public |
event_hubs | Denotes number of partitions (Azure specific) |
stream_mode | Denotes the current mode for the data stream |
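Since `retention_period` is expressed in seconds and `encryption` is a boolean, a simple compliance sweep over these attributes might look like the following (the stream dicts are illustrative sample data, not live API output):

```python
def flag_streams(streams, min_retention_hours=24):
    """Flag data streams that are unencrypted or retain records for
    less than the required window."""
    findings = []
    for s in streams:
        if not s["encryption"]:
            findings.append((s["name"], "server-side encryption disabled"))
        if s["retention_period"] / 3600 < min_retention_hours:
            findings.append((s["name"], "retention below minimum"))
    return findings

sample = [
    {"name": "clickstream", "encryption": True, "retention_period": 86400},
    {"name": "audit-feed", "encryption": False, "retention_period": 3600},
]
print(flag_streams(sample))
# [('audit-feed', 'server-side encryption disabled'), ('audit-feed', 'retention below minimum')]
```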
Data Sync Task
Tasks associated with online data transfer, both between on-premises storage devices and provider storage, as well as between provider storage services.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the resource resides |
task_id | The provider ID of the data sync task |
name | The name of the data sync task |
status | The status of the data sync task |
create_time | The date and time the data sync task was created |
arn | The ARN of the data sync task |
source_location_arn | The ARN of the source location of the data sync task |
destination_location_arn | The ARN of the destination location of the data sync task |
log_group_arn | The ARN of the log group of the data sync task |
options | Options for the data sync task |
Database
Database Migration Instance
An instance that uses a web service to migrate data from a source data store to a target data store. An example of this type of resource is AWS DMS Replication.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
instance_type | The provider-specific instance type identifier (optional) |
region_name | The region that this resource resides in |
instance_id | The provider-specific instance id value |
instance_flavor_resource_id | The flavor of instance used by the DB instance |
state | The state of the DB instance |
endpoint_public_address | The public IP address of the database endpoint |
endpoint_private_address | The private IP address of the database endpoint |
engine_version | The version of the database engine |
storage_size | The total size (GB) of the database |
multi_az | Denotes whether the database is configured in multiple availability zones (optional) |
create_time | The date the database migration instance was created/launched. |
encrypted | Denotes whether the database is encrypted (optional) |
arn | The Amazon Resource Name |
publicly_accessible | Denotes whether the instance is publicly accessible |
key_resource_id | The resource ID of the key that encrypts the logs |
network_resource_id | Network resource ID that the database instance is associated with |
auto_minor_upgrades | Denotes if the Database is set to update with minor upgrades |
Database Proxy
Simplifies connection management by handling network traffic between client applications and the database. An example of this type of resource is AWS RDS Database Proxy.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the proxy instance resides in |
name | The name of the proxy instance |
engine_family | The engine family the proxy instance was configured to use |
state | The current lifecycle state of the proxy |
create_time | The timestamp for when the proxy was created |
arn | The Amazon Resource Name for the proxy |
network_resource_id | The resource_id of the Network associated with the proxy |
endpoint | Denotes the endpoint address of the proxy |
require_tls | Indicates whether the proxy requires transport layer security (TLS) |
idle_timeout | The time in seconds a client can be idle before the proxy can close it |
iam_authentication_required | Indicates whether the proxy requires IAM authentication |
debug_logging | Indicates whether debug logging is enabled for the proxy |
Database Snapshot
Database Snapshots are point-in-time backups of a database instance. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the snapshot resides in |
snapshot_id | The provider ID of the snapshot |
name | The name of the snapshot |
instance_resource_id | The resource ID of the instance this snapshot was created from |
database_cluster_resource_id | The ID of the database cluster resource |
snapshot_type | The type of snapshot (manual vs automatic) |
state | The current lifecycle state of the snapshot |
availability_zone | The zone where the snapshot resides |
create_time | The time when the snapshot creation was launched |
port | The port that the database instance listens on |
engine | The database engine that the instance was configured to use |
engine_version | The engine version |
size | The size (GB) of the volume |
progress | The progress of the snapshot creation |
master_username | The master account associated with the instance |
license | The license used by the instance |
public | Denotes if the snapshot is publicly available |
encrypted | Denotes if the snapshot is encrypted |
key_resource_id | The resource id of encryption key associated with snapshot |
class DivvyResource.Resources.databasesnapshot.DatabaseSnapshot(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
Database Snapshot Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
get_parent_resource_id()
static get_provider_id_field()
static get_resource_type()
get_state()
Retrieve the snapshot state.
get_supported_actions()
handle_resource_created(user_resource_id=None, project_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to groups, alerts, etc.).
handle_resource_destroyed(user_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from projects/groups, alerts, etc.).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
snapshot
top_level_resource = True
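The `public` and `encrypted` attributes are the usual inputs to a snapshot exposure check. A sketch over plain dicts shaped like the attribute table above (sample values are hypothetical, not product output):

```python
def exposed_snapshots(snapshots):
    """Return names of snapshots that are publicly shared and unencrypted."""
    return [s["name"] for s in snapshots if s["public"] and not s["encrypted"]]

sample = [
    {"name": "prod-nightly", "public": False, "encrypted": True},
    {"name": "dev-copy", "public": True, "encrypted": False},
    {"name": "shared-enc", "public": True, "encrypted": True},
]
print(exposed_snapshots(sample))  # ['dev-copy']
```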
Databricks Workspace
A Databricks Workspace is an analytics platform based on Apache Spark that provides one-click setup, streamlined workflows, and an interactive workspace enabling collaboration between data engineers, data scientists, and machine learning engineers. An example of this type of resource is Azure Databricks Workspace.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the resource resides |
workspace_id | The provider ID of the databricks workspace |
name | The name of the databricks workspace |
state | The state of the databricks workspace ('succeeded' or 'failed') |
encryption_type | Denotes the encryption type (e.g., 'default', 'cmk') |
tier | The tier of the databricks workspace (e.g., 'premium', 'standard', 'trial') |
Delivery Stream
A Delivery stream loads streaming data into data stores and analytics tools (AWS Firehose). This class inherits from TopLevelResource and has direct access to the resource's database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
name | The name of the delivery stream |
region_name | The region that the resource resides in |
arn | The Amazon Resource Name of the delivery stream |
delivery_stream_type | The type of this delivery stream |
source_stream_arn | The ARN of source data stream |
status | The status of this delivery stream |
destinations | The JSON string of destinations for this delivery stream |
version_id | The version of the delivery stream |
updated_timestamp | The time the delivery stream was last updated |
created_timestamp | The time the delivery stream was created |
s3_destination | The storage container destination for this delivery stream |
trusted_accounts | The list of outside accounts receiving delivery stream data |
Elastic Cluster
A database cluster that allows you to scale your workload's throughput.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The name of the region in which the cluster resides |
name | The name of the cluster |
creation_time | The time the cluster was created |
state | The status of the cluster |
admin_username | The admin username for the cluster |
auth_type | The authentication type for the cluster |
key_resource_id | The ID of the encryption key associated with the cluster |
arn | The ARN associated with the cluster |
shard_capacity | The shard capacity for the cluster |
shard_count | The count of shards within the cluster |
relationships | A list of resources associated with the cluster |
ETL
ETL Connection
Extract, transform, load (ETL) connection is an object that stores login and access information for a data store that can be reused to load ETL jobs.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the connection resides |
name | The name of the connection |
connection_properties | Key-value pairs representing the properties of the connection |
connection_type | The type of connection |
description | The description for the connection |
match_criteria | A list of criteria that can be used for selecting the connection |
physical_connection_requirements | A map of physical connection requirements, such as VPC and Security Group |
namespace_id | The unique composite ID of the provider ID for the resource |
ETL Crawler
An ETL Crawler processes data schemas found in a given data store and creates metadata tables within a data catalog for the schemas.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the crawler resides in |
name | The name of the crawler |
configuration | The general configuration for the crawler |
crawler_security_configuration | The security configuration for the crawler |
database_name | The name of the database that will store the crawler's output |
description | A description of the crawler |
recrawl_policy | A policy that specifies whether to crawl the entire dataset again or only added folders |
role | The role that is used to access the related resources |
schema_change_policy | The policy that specifies update and delete behaviors for the crawler |
table_prefix | The prefix added to the names of tables that are created |
targets | The number of targets to crawl |
namespace_id | The unique composite ID of the provider ID for the resource |
ETL Data Catalog
ETL Data Catalog is an index to the location, schema, and runtime metrics of your data; supports extract, transform, and load (ETL) service.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the resource resides in |
name | The provider name for this resource |
metadata_encryption | Boolean denoting if metadata encryption is enabled for this resource |
metadata_key_resource_id | The resource_id of the metadata key, if present |
password_encryption | Boolean denoting if password encryption is enabled for this resource |
password_key_resource_id | The resource_id of the password key, if present |
policy | The IAM policy of the resource in JSON format |
trusted_accounts | The account numbers of any accounts with a trust relationship with this resource |
ETL Database
Extract, transform, load (ETL) databases are used to organize metadata for holistic ETL services.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The name of the region in which the ETL database resides |
name | The name of the database |
create_time | The time the database was created |
location_uri | The location URI of the database |
description | The description for the database |
table_count | The number of tables defined within the database |
permissions | A list of permissions for the database |
tables | A list of the tables defined within the database |
parameters | The parameters of the database |
namespace_id | The unique composite ID of the provider ID for the resource |
ETL Job
An ETL job is an individual extract, transform, and load job from given source data to a data target.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region the job is located in |
name | The name of the job |
command | The code that executes the job |
connections | The connections used for the job |
description | A description of the job |
execution_class | Indicates whether the job is run with a standard or flexible execution class |
execution_property | The maximum number of concurrent runs that are allowed for this job |
glue_version | Determines the version of Apache Spark and Python that the Job supports |
max_capacity | The number of job data processing units (DPUs) that can be allocated |
max_retries | The maximum number of times to retry this job after a job instance fails |
non_overridable_arguments | Non-overridable arguments for this job |
number_of_workers | The number of workers of a defined worker type that are allocated when a job runs |
role_resource_id | The resource ID for the role associated with the job |
security_configuration | The security configuration for the job |
timeout | The job timeout in minutes |
worker_type | The type of predefined worker that is allocated when a job runs |
namespace_id | The unique composite ID of the provider ID for the resource |
ETL Security Configuration
This resource is a set of security properties that can be used by your extract, transform, and load (ETL) service.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the resource resides in |
name | The provider name for this resource |
encryption | Boolean denoting whether encryption is enabled for this resource |
key_resource_id | The resource_id of the encryption key, if present |
job_encryption | Boolean denoting whether job encryption is enabled for this resource |
job_key_resource_id | The resource_id of the job encryption key, if present |
log_encryption | Boolean denoting whether log encryption is enabled for this resource |
log_key_resource_id | The resource_id of the log encryption key, if present |
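As a sketch of how the boolean encryption flags above might be consumed downstream, the snippet below checks which of them are disabled. The dict is a hypothetical stand-in for these attributes, not an InsightCloudSec API:

```python
def missing_encryption(config):
    """Return the encryption settings from the table above that are disabled."""
    flags = ("encryption", "job_encryption", "log_encryption")
    return [flag for flag in flags if not config.get(flag)]

config = {
    "name": "etl-security-config-1",  # hypothetical resource name
    "encryption": True,
    "job_encryption": False,
    "log_encryption": True,
}
print(missing_encryption(config))  # prints ['job_encryption']
```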
GraphQL API
GraphQL API
GraphQL manages services that improve performance, support real-time updates, and make connecting to secure datasources easy. An example of this type of resource is AWS AppSync API.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that this resource resides in |
api_id | The unique ID for the GraphQL API |
name | The name of the GraphQL API |
arn | The Amazon Resource Name for the GraphQL API |
xray_enabled | Boolean denoting if X-Ray tracing is enabled for the GraphQL API |
web_acl_id | The unique ID for the web ACL associated with the GraphQL API |
authentication_type | The authentication type for the GraphQL API |
log_config | The Amazon CloudWatch Logs configuration for the GraphQL API |
user_pool_config | The Amazon Cognito user pool configuration for the GraphQL API |
open_id_config | The OpenID Connect configuration for the GraphQL API |
api_caching_behavior | The API caching behavior enabled for the GraphQL API |
api_caching_instance_type | The type of API caching instance enabled |
api_caching_rest_encryption | Boolean denoting if the API caching instance is encrypted at rest |
api_caching_transit_encryption | Boolean denoting if the API caching instance is encrypted when connecting |
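The caching attributes above can be combined into a simple posture check, for example flagging APIs that cache responses without at-rest encryption. This is a hypothetical sketch; the field values (including `PER_RESOLVER_CACHING`) are illustrative, and treating an empty caching behavior as "no caching" is an assumption:

```python
def cache_missing_rest_encryption(api):
    """Flag APIs where caching is enabled but the cache is not encrypted at rest."""
    caching_enabled = bool(api.get("api_caching_behavior"))  # assumption: falsy means caching is off
    return caching_enabled and not api.get("api_caching_rest_encryption")

api = {
    "name": "orders-api",                            # hypothetical API name
    "api_caching_behavior": "PER_RESOLVER_CACHING",  # illustrative value
    "api_caching_rest_encryption": False,
}
print(cache_missing_rest_encryption(api))  # prints True
```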
Recycle Bin Rule
Recycle Bin Rule
A Recycle Bin Rule assists in preventing accidental deletion of snapshots using custom retention rules and recovery.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that this resource resides in |
rule_id | The ID for the recycle bin rule |
name | The name of the recycle bin rule |
description | A description for the recycle bin rule |
arn | The Amazon Resource Name of this resource |
retention_period | The length of time a resource is retained (in days) |
rule_resource_type | The resource the rule applies to |
apply_to_all_resources | Denotes if the rule applies to all resource types |
resource_tags | Resource tags associated with the rule |
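Since `retention_period` is expressed in days, the point at which a deleted resource becomes unrecoverable can be computed from its deletion time. A minimal sketch, assuming the deletion timestamp is available as a `datetime`:

```python
from datetime import datetime, timedelta

def recoverable_until(deleted_at, retention_period):
    """Return the deadline after which a deleted resource is gone.

    retention_period is in days, per the attribute table above.
    """
    return deleted_at + timedelta(days=retention_period)

deadline = recoverable_until(datetime(2024, 5, 1, 12, 0), retention_period=14)
print(deadline)  # prints 2024-05-15 12:00:00
```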
Secure File Transfer
Secure File Transfer
Secure File Transfer is a fully managed service that enables secure transfer of files into and out of storage.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that this resource resides in |
arn | The Amazon Resource Name of this resource |
name | The name of the secure file transfer resource |
state | The current state of the server
endpoint_type | Denotes the endpoint type of the SFTP server
vpc_endpoint | The VPC endpoint address of the server
vpc_endpoint_resource_id | The resource ID of the associated VPC endpoint
identity_provider | The identity provider of the server
hostname | Denotes the custom hostname of the server
dns_zone_resource_id | The resource ID of the DNS zone associated with the hostname
identity_url | The URL of the identity provider
logging_role_name | The logging role for the server
logging_role_resource_id | The resource ID of the role associated with the server |
invocation_role_name | The name of the associated invocation role |
invocation_role_resource_id | The invocation role resource ID |
user_count | The current number of users |
users | A list containing information about the users associated with the server |
protocols | The protocols associated with the server |
security_policy | The security policy associated with the server |
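Because `protocols` is a list, it lends itself to a simple check for plaintext transfer protocols. This is an illustrative sketch against a hand-built dict, assuming the provider reports unencrypted FTP as the literal string "FTP":

```python
def insecure_protocols(server):
    """Return any plaintext protocols enabled on the server."""
    return [p for p in server.get("protocols", []) if p == "FTP"]

server = {
    "name": "transfer-server-1",   # hypothetical server name
    "protocols": ["SFTP", "FTP"],  # FTP is the unencrypted one; SFTP is fine
}
print(insecure_protocols(server))  # prints ['FTP']
```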
Snapshot
Snapshot
Snapshots are point-in-time backups of a volume. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
organization_service_id | The ID of the parent organization service (cloud) |
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters. |
region_name | The region the snapshot resides in |
snapshot_id | The provider ID of the snapshot |
volume_resource_id | The resource ID of the volume this snapshot was created from |
name | The name of the volume’s snapshot |
description | Description of the snapshot |
state | The current lifecycle state of the snapshot |
progress | The creation progress of the snapshot |
size | The size in gigabytes of the volume |
public | Denotes whether the snapshot is publicly available |
start_time | The time the snapshot was started |
create_time | The time when the snapshot finished creating |
encrypted | Denotes whether the snapshot is encrypted |
key_resource_id | The provider ID of the key used for the snapshot |
class DivvyResource.Resources.snapshot.Snapshot(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
Snapshot Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
get_parent_resource_id()
get_private_images()
Retrieve a list of db objects for private images created from the snapshot (if any).
static get_provider_id_field()
get_resource_dependencies()
Retrieve the dependencies for a particular resource. This is an override of the parent function because we don’t have ResourceLink relationships for volumes and private/public images where the snapshot ID is included in the block device mapping.
static get_resource_type()
get_size()
Retrieve the size of the snapshot.
get_supported_actions()
Retrieve all the actions which are supported by this resource.
handle_resource_created(user_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to groups, alerts, etc).
handle_resource_destroyed(user_resource_id=None, project_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from groups, alerts, etc).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
is_backup()
Determine if this snapshot represents a volume backup.
snapshot
snapshot_id
top_level_resource = True
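The delete() semantics described above (queue the job when inside a with JobQueue() block, otherwise run immediately) can be sketched with toy stand-ins. JobQueue and Snapshot below are illustrative only, not the DivvyResource implementations:

```python
class JobQueue:
    """Toy context manager mimicking the queued-deletion behavior described above."""
    _active = None

    def __init__(self):
        self.jobs = []

    def __enter__(self):
        JobQueue._active = self
        return self

    def __exit__(self, *exc):
        JobQueue._active = None
        return False


class Snapshot:
    """Toy resource; only the delete() dispatch is modeled."""
    def __init__(self, resource_id):
        self.resource_id = resource_id
        self.deleted = False

    def delete(self, user_resource_id=None):
        queue = JobQueue._active
        if queue is not None:
            queue.jobs.append(self.resource_id)  # deferred: queued for later execution
        else:
            self.deleted = True                  # no active queue: delete immediately


with JobQueue() as queue:
    Snapshot("snapshot:1:abc123").delete()
print(queue.jobs)  # prints ['snapshot:1:abc123']
```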
Spanner
Spanner
A spanner is a globally-distributed relational database system. This class inherits from TopLevelResource and has direct access to the resource's database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the spanner resides in |
name | The name of the spanner |
node_count | The number of nodes the spanner has |
state | The current state of the spanner (available or in-use) |
size | The size in bytes of the spanner |
display_name | The display name of the spanner |
relationships | A list of resources associated with the spanner |
cluster_id | Unique provider ID for the cluster |
arn | ARN associated with the spanner |
engine | The engine currently running on the spanner |
engine_version | The version of the engine currently running on the spanner |
storage_encrypted | Denotes if the storage is encrypted on the spanner |
deletion_protection | Denotes if deletion protection is enabled on the spanner |
Storage
Storage
Storage Account
Currently Azure-only, a Storage Account contains all storage data objects: blobs, files, queues, tables, and disks. This class inherits from TopLevelResource and has direct access to the resource's database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the resource resides in |
name | The name of the storage account |
creation_time | The date and timestamp when the storage account was created
state | The provisioning state of the storage account
access_tier | The access tier of the storage account
primary_endpoints | The storage account's primary endpoint
secondary_endpoints | The storage account's secondary endpoint
custom_domain | Denotes if the resource has a custom domain configured
blob_encrypted | Denotes whether the account has blob encryption enabled |
file_encrypted | Denotes whether the account has file encryption enabled |
queue_encrypted | Denotes whether the account has queue encryption enabled |
table_encrypted | Denotes whether the account has table encryption enabled |
transit_encryption | Denotes whether the account has transit encryption enabled |
threat_protection | Denotes whether the account has threat protection enabled |
encryption_type | Denotes the encryption type |
minimal_tls_version | The TLS version configured on the storage account |
allow_public_access | Indicates if the storage account allows public blob access |
namespace_id | The unique composite ID of the provider ID for the resource |
public | Indicates if the storage account allows public network access |
sftp_enabled | Indicates if secure file transfer protocol (SFTP) is enabled |
hns_enabled | Indicates if hierarchical namespace (HNS) is enabled |
allow_cross_tenant_replication | Indicates if cross-tenant replication is allowed |
allow_shared_key_access | Indicates if shared key access is allowed |
infrastructure_encryption | Indicates if infrastructure encryption is enabled |
change_feed_enabled | Indicates if the account has the change feed enabled |
access_keys | The access keys associated with the account |
key_policy | The policy for the access keys associated with the account |
bypass_actions | The bypass actions associated with the account |
diagnostic_settings | The diagnostic settings associated with the account |
file_soft_delete_enabled | Indicates if file soft delete is enabled for the account |
file_delete_retention_period | The file delete retention period for the account |
blob_soft_delete_enabled | Indicates if blob soft delete is enabled for the account |
blob_delete_retention_period | The blob delete retention period for the account |
container_soft_delete_enabled | Indicates if container soft delete is enabled for the account |
container_delete_retention_period | The container delete retention period for the account |
default_to_oath_authentication | Denotes whether the account defaults to using OAuth authentication |
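Several of the attributes above map naturally onto a posture check. A minimal sketch, assuming Azure-style minimum TLS version strings such as `TLS1_0`; the dict is a hypothetical stand-in for these attributes:

```python
WEAK_TLS = {"TLS1_0", "TLS1_1"}  # assumption: Azure-style minimum TLS version strings

def storage_account_findings(account):
    """Collect simple posture findings from the attributes in the table above."""
    findings = []
    if account.get("allow_public_access"):
        findings.append("public blob access allowed")
    if account.get("minimal_tls_version") in WEAK_TLS:
        findings.append("weak minimum TLS version")
    if not account.get("infrastructure_encryption"):
        findings.append("infrastructure encryption disabled")
    return findings

account = {
    "name": "examplestorage",         # hypothetical account name
    "allow_public_access": True,
    "minimal_tls_version": "TLS1_0",
    "infrastructure_encryption": True,
}
print(storage_account_findings(account))
# prints ['public blob access allowed', 'weak minimum TLS version']
```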
Storage Container
Storage Containers are scalable data storage. An example of this is an Amazon S3 bucket. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the storage container resides in |
name | The name of the storage container |
creation_date | The date that the storage container was created
updated_date | The date that the storage container was last updated |
object_count | The total number of objects within storage container |
total_size | The total size of the storage container (bytes) |
total_size_human_readable | The total size of the storage container in a human-readable format
policy | The JSON of container or user policy associated with this storage container |
trusted_accounts | The accounts with a trust relationship |
policy_encryption | Denotes whether the storage container is using policy encryption (object level) |
transit_encryption | Denotes whether the account has transit encryption enabled |
logging | Denotes whether access logging is enabled |
logging_bucket | The target bucket for storing server access logs
versioning | Denotes whether object versioning is enabled |
mfa_delete | Denotes if MFA delete is enabled |
public | Denotes whether the storage container is accessible by the public |
global_encryption | Default server side encryption for storage container |
key_resource_id | The resource id of encryption key associated with Storage Container |
storage_class | The storage class type of a container |
website | The associated website |
website_config | Specifies website configuration parameters for the bucket |
lifecycle_policy | The lifecycle policy, if one applies
intelligent_tiering | Denotes if intelligent tiering is enabled for the storage container |
intelligent_tiering_config | If enabled, the intelligent tiering configuration |
public_acl | Denotes if Public ACL is applied |
public_policy | Denotes if public policy is applied |
public_access_block | The public access block of the storage container (AWS) |
impaired_visibility | Denotes whether visibility into the full configuration is impaired |
storage_account_resource_id | The Azure specific storage Account resource ID |
impaired_visibility_properties | Denotes visibility status |
object_lock_configuration | Defines the bucket's object lock configuration and rules |
bucket_replication | Denotes if bucket replication is enabled |
uniform_access | Denotes if the bucket has uniform access |
bucket_key_enabled | Denotes if the bucket key is enabled |
namespace_id | ID for the bucket's namespace |
soft_delete_retention | Defines soft delete retention protocol for the bucket |
location_type | The type of location for the bucket |
object_ownership | Defines object ownership protocol for the bucket |
blob_soft_delete_retention | Defines blob soft delete retention protocol for the bucket |
notification_configuration | Defines notification configuration for the bucket |
infrastructure_encryption | Denotes if the bucket has infrastructure encryption enabled
class DivvyResource.Resources.storagecontainer.StorageContainer(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
Storage Container Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
classmethod get_encrypted_status(policy)
get_merged_permissions(new_permissions, delete=False)
Build a list of current and existing permissions. This is required as the cloud providers want a full list of permissions. If you do not do this then existing permissions will be lost.
static get_provider_id_field()
static get_resource_type()
get_supported_actions()
Retrieve all the actions which are supported by this resource.
handle_resource_created(user_resource_id=None, project_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to groups, alerts, etc).
handle_resource_destroyed(user_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from groups, alerts, etc).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
properties
This is a temporary override, similar to how the resource object is set up for resource groups and other select resources. Although the DivvyDbObject definition inherits LinkedResource_Mixin, there are select corner cases where properties is not found; this override was judged the less expensive route and fixes the bug in the current version.
storage_container
top_level_resource = True
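The get_merged_permissions method above exists because cloud providers expect the full permission list on every update, so new grants must be merged with (or removed from) the existing set. A hypothetical sketch of that merge logic, not the actual implementation:

```python
def merge_permissions(existing, new, delete=False):
    """Return the full permission list to send upstream.

    With delete=True the new permissions are removed from the existing set;
    otherwise they are appended, preserving order and avoiding duplicates.
    """
    if delete:
        return [p for p in existing if p not in new]
    merged = list(existing)
    merged.extend(p for p in new if p not in merged)
    return merged

current = ["READ", "READ_ACP"]  # illustrative permission names
print(merge_permissions(current, ["WRITE"]))              # prints ['READ', 'READ_ACP', 'WRITE']
print(merge_permissions(current, ["READ"], delete=True))  # prints ['READ_ACP']
```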
Storage Gateway
Storage gateways securely connect on-premises software applications with cloud-based storage.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
gateway_id | The ID of the storage gateway |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the storage gateway resides |
name | The name for the storage gateway |
arn | The ARN for the storage gateway |
gateway_type | The type of storage gateway |
last_software_update | The last time the storage gateway's software was updated |
deprecation_date | The date the storage gateway's software will be deprecated |
instance_resource_id | The resource ID of the instance used as the gateway |
host_environment | The type of hardware or software platform the gateway is running on |
capacity | The capacity for the storage gateway |
Storage Queues
Storage Queues store large numbers of messages that can be accessed from anywhere at any time to process work asynchronously.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the service is located |
name | The name of the queue |
creation_date | The date the queue was created |
updated_date | The date the queue was last updated |
logging | Denotes whether the queue has logging enabled |
global_encryption | The server-side encryption configuration for the queue |
transit_encryption | Denotes whether the queue enforces transit encryption |
key_resource_id | The resource ID for the encryption key associated with the queue |
storage_class | The storage class for the queue |
storage_account_resource_id | The resource ID for the storage account associated with the queue |
namespace_id | The provider-specific namespace ID value |
infrastructure_encryption | Denotes whether the queue has infrastructure encryption enabled |
Storage Sync Service
A storage sync service assists with centralizing your file shares while also enabling high availability and recovery.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the service is located |
namespace_id | The provider-specific namespace ID value |
service_id | The ID for the service |
name | The name for the service |
provisioning_state | The provisioning state of the service |
status | The status of the service |
private_endpoint_connections | The private endpoint connections of the service |
incoming_traffic_policy | The incoming traffic policy of the service |
last_operation_name | The last operation name of the service |
Stored Parameter
Stored Parameter
Stored parameters provide secure storage for configuration data and secrets (e.g., passwords, database connection strings, AMI IDs in AWS) as parameter values.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region in which the stored parameter resides |
name | The name of the stored parameter |
data_type | The data type of the stored parameter (e.g. String or SecureString) |
key_resource_id | The InsightCloudSec resource ID of the encryption key associated with the stored parameter |
tier | The tier of the stored parameter (e.g. Standard) |
expiration | The expiration date of the stored parameter |
last_modified | The timestamp for the last modification of the stored parameter |
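Since stored parameters carry an `expiration` attribute, a common task is finding parameters that have lapsed. A minimal sketch against hand-built dicts standing in for these attributes (the parameter names are hypothetical):

```python
from datetime import datetime

def expired_parameters(parameters, now):
    """Return names of stored parameters whose expiration has passed."""
    return [
        p["name"]
        for p in parameters
        if p.get("expiration") is not None and p["expiration"] < now
    ]

params = [
    {"name": "db-password", "expiration": datetime(2024, 1, 1)},  # hypothetical names
    {"name": "api-token", "expiration": datetime(2030, 1, 1)},
    {"name": "static-config", "expiration": None},                # never expires
]
print(expired_parameters(params, now=datetime(2024, 6, 1)))  # prints ['db-password']
```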
Timeseries Database
Timeseries Database
Timeseries databases store and analyze trillions of events daily for internet of things (IoT) and operational applications, e.g., Amazon Timestream.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the resource resides in |
database_name | The name for the database |
arn | The ARN associated with the resource |
table_count | The number of tables within the database |
key_resource_id | The resource ID for the key used to encrypt the database
create_time | The timestamp when the database was created
last_update_time | The timestamp when the database was last updated
Video Stream
Video Stream
Video Stream is a service used to securely stream video from connected devices, for example for analytics, machine learning (ML), playback, and other processing. In AWS, Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
name | The name of the video stream |
region_name | The region that the resource resides in |
arn | The Amazon Resource Name of this resource |
version | The version of the video stream
media_type | The media type of the video stream
key_resource_id | The InsightCloudSec resource ID of the encryption key used to encrypt this video stream
created_timestamp | The date and timestamp when the video stream was created
retention_period | The length of time in seconds that the video stream will be retained |
Volume
Volume
Volumes are network attached storage such as the EBS service within AWS. This class inherits from TopLevelResource and has direct access to the resource’s database object.
Attribute | Description |
---|---|
resource_id | The primary resource identifier that takes the form of a prefix followed by numbers and letters |
organization_service_id | The ID of the parent organization service (cloud) |
region_name | The region that the volume resides in |
volume_id | The provider ID of the volume |
name | The name of the volume |
instance_resource_id | The resource ID of the instance this volume is associated with |
snapshot_resource_id | The resource ID of the snapshot this volume was built from |
create_time | The timestamp of when this volume was created |
device | The device that a volume is mapped to on the instance (e.g., /dev/sdf) |
state | The current state of the volume (available or in-use) |
volume_type | The type of volume (e.g., pd-standard, gp2, premium_LRS, etc.) |
size | The size of the volume in gigabytes |
availability_zone | The availability_zone where the volume resides |
iops | The total IOPS allocated to this volume (provisioned volumes only) |
encrypted | Denotes whether the volume is encrypted |
delete_on_termination | Denotes if the volume is set to automatically delete when the parent instance is terminated |
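The `state` and `encrypted` attributes above combine into a common check: attached volumes that are not encrypted. The sketch below runs over hand-built dicts standing in for these attributes; the volume IDs are hypothetical:

```python
def unencrypted_in_use(volumes):
    """Return volume IDs that are attached (in-use) but not encrypted."""
    return [
        v["volume_id"]
        for v in volumes
        if v.get("state") == "in-use" and not v.get("encrypted")
    ]

volumes = [
    {"volume_id": "vol-0abc", "state": "in-use", "encrypted": False},     # hypothetical IDs
    {"volume_id": "vol-0def", "state": "in-use", "encrypted": True},
    {"volume_id": "vol-0123", "state": "available", "encrypted": False},  # detached, skipped
]
print(unencrypted_in_use(volumes))  # prints ['vol-0abc']
```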
class DivvyResource.Resources.volume.Volume(resource_id)
Bases: DivvyResource.Resources.toplevelresource.TopLevelResource
Volume Operations
delete(user_resource_id=None)
Delete this resource. If wrapped in a with JobQueue() block, this will queue the deletion job to the wrapped queue, otherwise it calls immediately.
get_attached_instance_resource_id()
Retrieve the resource id of the instance this volume is attached to, if any.
get_availability_zone()
Retrieve the availability zone/location of the resource.
get_date_created()
Retrieve the time from the provider that this resource was created (if available).
static get_db_class()
get_device()
Retrieve the attached device name of the volume (e.g., /dev/sdf).
get_parent_resource_id()
static get_provider_id_field()
static get_resource_type()
get_size()
Retrieve the size in GB of the resource.
get_snapshots()
Retrieve a list of db objects for snapshots created from the volume (if any)
get_supported_actions()
Retrieve all the actions which are supported by this resource.
get_volume_backup_scheduled_events()
Retrieve volume backup scheduled events.
get_volume_type()
Retrieve the volume type of the resource.
handle_resource_created(user_resource_id=None, project_resource_id=None)
This should be called when a resource is created/discovered after the basic data is added to the database. This gives an opportunity for post-addition hooks (assignment to projects/groups, alerts, etc).
handle_resource_destroyed(user_resource_id=None)
This should be called when a resource is destroyed before the basic data is removed from the database. This gives an opportunity for pre-destruction hooks (removal from projects/groups, alerts, etc).
handle_resource_modified(resource, *args, **kwargs)
This should be called when a resource is modified after the new data has been updated in the DB session. This gives an opportunity for post-modification hooks.
modify(iops=None, size=None, volume_type=None, user_resource_id=None)
Modify the volume. This makes a call to the upstream provider to change one or more properties.
schedule_modification(*args, **kwargs)
Create a scheduled event to modify an existing volume. If a schedule is not supplied then the event will be scheduled to run immediately.
top_level_resource = True
volume
volume_id