

For an introduction to Terraform, please refer to the official Terraform documentation.


You can use Terraform to create a ShardingSphere high availability cluster on Huawei Cloud. The cluster architecture is shown below. More cloud providers will be supported in the near future.

The following Huawei Cloud resources are created:

  1. One ZooKeeper instance per AZ.
  2. One Auto Scaling Group and One Auto Scaling Configuration.
  3. An intranet network load balancer for the ShardingSphere Proxy cluster.
  4. An intranet domain for applications.

Quick Start


To create a ShardingSphere Proxy highly available cluster, you need to prepare the following resources in advance:

  1. An SSH key pair used to remotely connect to ECS instances.
  2. One VPC.
  3. One subnet.
  4. A security group that opens ports 2888, 3888, and 2181, which are used by the ZooKeeper servers.
  5. An intranet Zone.
  6. AK/SK of the Huawei Cloud account.

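If the prerequisite security group does not exist yet, it can also be created with Terraform. The following is a minimal sketch, not part of the module itself; it assumes the `huaweicloud_networking_secgroup` and `huaweicloud_networking_secgroup_rule` resources of the huaweicloud provider, and the CIDR value is a hypothetical placeholder:

```hcl
# Hedged sketch: prepare the prerequisite security group for ZooKeeper.
resource "huaweicloud_networking_secgroup" "zk" {
  name = "shardingsphere-zk-sg" # hypothetical name
}

# Open the ZooKeeper ports: 2181 (client), 2888 (follower), 3888 (leader election).
resource "huaweicloud_networking_secgroup_rule" "zk_ports" {
  for_each          = toset(["2181", "2888", "3888"])
  security_group_id = huaweicloud_networking_secgroup.zk.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = tonumber(each.value)
  port_range_max    = tonumber(each.value)
  remote_ip_prefix  = "10.0.0.0/8" # hypothetical VPC CIDR; adjust to yours
}
```

Restricting `remote_ip_prefix` to the VPC CIDR keeps the ZooKeeper ports reachable only from inside the network.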

  1. Clone the repository and enter the Terraform directory, then create the terraform.tfvars file from the resources prepared above.
git clone --depth=1
cd shardingsphere-on-cloud/terraform/huawei

The terraform.tfvars sample content is as follows:

shardingsphere_proxy_version = "5.3.1"
image_id                     = ""
key_name                     = "test-tf"
flavor_id                    = "c7.large.2"
vpc_id                       = "4b9db05b-4d57-464d-a9fe-83da3de0a74c"
vip_subnet_id                = ""
subnet_ids                   = ["6d6c57ed-5284-4a7b-b0e3-0b24aa6c9552"]
security_groups              = ["f5ad3525-dc9e-482e-afde-868ee330e7a5"]
lb_listener_port             = 3307
zk_flavor_id                 = "s6.medium.2"
  2. Run the following command to set the AK/SK and region.
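The command itself is not shown here; the following is a minimal sketch, assuming the huaweicloud provider reads credentials from the standard HW_* environment variables (the values below are placeholders, not real keys):

```shell
# Hedged example: export Huawei Cloud credentials for the Terraform provider.
export HW_ACCESS_KEY="your-access-key"   # AK of the Huawei Cloud account (placeholder)
export HW_SECRET_KEY="your-secret-key"   # SK of the Huawei Cloud account (placeholder)
export HW_REGION_NAME="cn-north-4"       # target region (example value)
```

Environment variables keep the AK/SK out of the terraform.tfvars file, so the credentials are not committed to version control by accident.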
  3. Under the huawei directory, run the following commands to deploy the ShardingSphere Proxy cluster.
terraform init
terraform plan  -var-file=terraform.tfvars
terraform apply  -var-file=terraform.tfvars

User Manual


Providers

| Name | Version |
|------|---------|
| huaweicloud | 1.43.0 |


Modules

| Name | Source | Version |
|------|--------|---------|
| zk | ./modules/zk | n/a |


Resources

| Name | Type |
|------|------|
| huaweicloud_dns_zone.private_zone | resource |
| huaweicloud_availability_zones.zones | data source |
| huaweicloud_images_image.myimage | data source |
| huaweicloud_vpc_subnet.vipnet | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| flavor_id | The flavor ID of the ECS | string | n/a | yes |
| image_id | The image ID | string | "" | no |
| key_name | The SSH key pair for remote connection | string | n/a | yes |
| lb_listener_port | The LB listener port | string | n/a | yes |
| security_groups | List of security group IDs | list(string) | [] | no |
| shardingsphere_proxy_as_desired_number | The initial desired number of instances in the ShardingSphere Proxy Auto Scaling group | number | 3 | no |
| shardingsphere_proxy_as_healthcheck_grace_period | The health check grace period for instances, in seconds | number | 120 | no |
| shardingsphere_proxy_as_max_number | The maximum size of the ShardingSphere Proxy Auto Scaling group | number | 6 | no |
| shardingsphere_proxy_doamin_prefix_name | The prefix name of the ShardingSphere domain; the final generated name will be [prefix_name].[zone_name] | string | "proxy" | no |
| shardingsphere_proxy_version | The ShardingSphere Proxy version | string | n/a | yes |
| subnet_ids | List of subnets, sorted by availability zone, in your VPC | list(string) | n/a | yes |
| vip_subnet_id | The IPv4 subnet ID of the subnet where the load balancer works | string | "" | no |
| vpc_id | The ID of your VPC | string | n/a | yes |
| zk_cluster_size | The ZooKeeper cluster size | number | 3 | no |
| zk_flavor_id | The ECS instance type for ZooKeeper | string | n/a | yes |
| zk_servers | The ZooKeeper servers | list(string) | [] | no |
| zone_id | The ID of the private zone | string | "" | no |
| zone_name | The name of the private zone | string | "" | no |


Outputs

| Name | Description |
|------|-------------|
| shardingsphere_domain | The domain name of the ShardingSphere Proxy cluster, for use by other services |
| zk_node_domain | The domain of the ZooKeeper instances |


By default, ZooKeeper and ShardingSphere Proxy services created using our Terraform configuration can be managed using systemd.



ZooKeeper


systemctl start zookeeper


systemctl stop zookeeper


systemctl restart zookeeper

ShardingSphere Proxy


systemctl start shardingsphere-proxy


systemctl stop shardingsphere-proxy


systemctl restart shardingsphere-proxy