class Google::Apis::DataprocV1beta2::ClusterConfig
The cluster config.
Attributes
Autoscaling Policy config associated with the cluster. Corresponds to the JSON property `autoscalingConfig` @return [Google::Apis::DataprocV1beta2::AutoscalingConfig]
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket. Corresponds to the JSON property `configBucket` @return [String]
Encryption settings for the cluster. Corresponds to the JSON property `encryptionConfig` @return [Google::Apis::DataprocV1beta2::EncryptionConfig]
Endpoint config for this cluster. Corresponds to the JSON property `endpointConfig` @return [Google::Apis::DataprocV1beta2::EndpointConfig]
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. Corresponds to the JSON property `gceClusterConfig` @return [Google::Apis::DataprocV1beta2::GceClusterConfig]
The GKE config for this cluster. Corresponds to the JSON property `gkeClusterConfig` @return [Google::Apis::DataprocV1beta2::GkeClusterConfig]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role)
if [[ "${ROLE}" == 'Master' ]]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi

Corresponds to the JSON property `initializationActions` @return [Array<Google::Apis::DataprocV1beta2::NodeInitializationAction>]
Specifies the cluster auto-delete schedule configuration. Corresponds to the JSON property `lifecycleConfig` @return [Google::Apis::DataprocV1beta2::LifecycleConfig]
The config settings for Compute Engine resources in an instance group, such as a master or worker group. Corresponds to the JSON property `masterConfig` @return [Google::Apis::DataprocV1beta2::InstanceGroupConfig]
Specifies a Metastore configuration. Corresponds to the JSON property `metastoreConfig` @return [Google::Apis::DataprocV1beta2::MetastoreConfig]
The config settings for Compute Engine resources in an instance group, such as a master or worker group. Corresponds to the JSON property `secondaryWorkerConfig` @return [Google::Apis::DataprocV1beta2::InstanceGroupConfig]
Security related configuration, including encryption, Kerberos, etc. Corresponds to the JSON property `securityConfig` @return [Google::Apis::DataprocV1beta2::SecurityConfig]
Specifies the selection and config of software inside the cluster. Corresponds to the JSON property `softwareConfig` @return [Google::Apis::DataprocV1beta2::SoftwareConfig]
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket. Corresponds to the JSON property `tempBucket` @return [String]
The config settings for Compute Engine resources in an instance group, such as a master or worker group. Corresponds to the JSON property `workerConfig` @return [Google::Apis::DataprocV1beta2::InstanceGroupConfig]
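Each attribute above maps to a snake_case keyword argument on the constructor (mirroring the camelCase JSON property). As a minimal, illustrative sketch: the bucket names are hypothetical, and the InstanceGroupConfig fields num_instances and machine_type_uri are assumed here rather than documented in this section.

require 'google/apis/dataproc_v1beta2'

# Assumed fields on InstanceGroupConfig; not documented above.
worker = Google::Apis::DataprocV1beta2::InstanceGroupConfig.new(
  num_instances: 2,
  machine_type_uri: 'n1-standard-4'
)

config = Google::Apis::DataprocV1beta2::ClusterConfig.new(
  config_bucket: 'my-staging-bucket',  # bucket name, not a gs:// URI
  temp_bucket: 'my-temp-bucket',
  worker_config: worker
)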
Public Class Methods
# File lib/google/apis/dataproc_v1beta2/classes.rb, line 485
def initialize(**args)
  update!(**args)
end
Public Instance Methods
Update properties of this object
# File lib/google/apis/dataproc_v1beta2/classes.rb, line 490
def update!(**args)
  @autoscaling_config = args[:autoscaling_config] if args.key?(:autoscaling_config)
  @config_bucket = args[:config_bucket] if args.key?(:config_bucket)
  @encryption_config = args[:encryption_config] if args.key?(:encryption_config)
  @endpoint_config = args[:endpoint_config] if args.key?(:endpoint_config)
  @gce_cluster_config = args[:gce_cluster_config] if args.key?(:gce_cluster_config)
  @gke_cluster_config = args[:gke_cluster_config] if args.key?(:gke_cluster_config)
  @initialization_actions = args[:initialization_actions] if args.key?(:initialization_actions)
  @lifecycle_config = args[:lifecycle_config] if args.key?(:lifecycle_config)
  @master_config = args[:master_config] if args.key?(:master_config)
  @metastore_config = args[:metastore_config] if args.key?(:metastore_config)
  @secondary_worker_config = args[:secondary_worker_config] if args.key?(:secondary_worker_config)
  @security_config = args[:security_config] if args.key?(:security_config)
  @software_config = args[:software_config] if args.key?(:software_config)
  @temp_bucket = args[:temp_bucket] if args.key?(:temp_bucket)
  @worker_config = args[:worker_config] if args.key?(:worker_config)
end
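update! assigns only the keys present in args, leaving other properties untouched, so it can overwrite individual fields on an existing instance. A brief usage sketch with hypothetical bucket names:

config = Google::Apis::DataprocV1beta2::ClusterConfig.new(config_bucket: 'my-staging-bucket')
# Only the keys passed here are reassigned; config_bucket keeps its value.
config.update!(temp_bucket: 'my-temp-bucket')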