Class name: ReplicaQueryRuleConfiguration
Attributes:
Name | DataType | Description |
---|---|---|
dataSources (+) | Collection<ReplicaQueryDataSourceRuleConfiguration> | Primary and replica data source configurations |
loadBalancers (*) | Map<String, ShardingSphereAlgorithmConfiguration> | Load balance algorithm name and configurations of replica data sources |
Class name: ReplicaQueryDataSourceRuleConfiguration
Attributes:
Name | DataType | Description | Default Value |
---|---|---|---|
name | String | Replica query data source name | - |
primaryDataSourceName | String | Primary data source name | - |
replicaDataSourceNames (+) | Collection<String> | Replica data source name list | - |
loadBalancerName (?) | String | Load balance algorithm name of replica data sources | Round-robin load balance algorithm |
Please refer to the Built-in Load Balance Algorithm List for more details about algorithm types.
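The default round-robin algorithm can be sketched as follows. This is a stand-alone illustration of the selection logic, not ShardingSphere's actual implementation; the class name is hypothetical.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin selection over replica data source names.
public final class RoundRobinSketch {

    private final AtomicInteger count = new AtomicInteger(0);

    // Pick the next replica in strict rotation, wrapping at the end of the list.
    public String getDataSource(List<String> replicaNames) {
        int index = Math.abs(count.getAndIncrement() % replicaNames.size());
        return replicaNames.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch();
        List<String> replicas = List.of("replica_ds_0", "replica_ds_1");
        System.out.println(lb.getDataSource(replicas)); // replica_ds_0
        System.out.println(lb.getDataSource(replicas)); // replica_ds_1
        System.out.println(lb.getDataSource(replicas)); // replica_ds_0
    }
}
```

The `AtomicInteger` keeps the rotation consistent when several threads pick replicas concurrently.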
Name | DataType | Explanation |
---|---|---|
dataSourceMap | Map<String, DataSource> | Map of data sources and their names |
masterSlaveRuleConfig | MasterSlaveRuleConfiguration | Master slave rule configuration |
props (?) | Properties | Property configurations |
Name | DataType | Explanation |
---|---|---|
name | String | Readwrite-splitting data source name |
masterDataSourceName | String | Master database source name |
slaveDataSourceNames | Collection<String> | Slave database source name list |
loadBalanceAlgorithm (?) | MasterSlaveLoadBalanceAlgorithm | Slave database load balance algorithm |
Property configuration items, which can be any of the following:
Name | Data Type | Explanation |
---|---|---|
sql.show (?) | boolean | Whether to print SQL parse and rewrite logs, default value: false |
executor.size (?) | int | The number of worker threads for SQL execution; no limit when set to 0, default value: 0 |
max.connections.size.per.query (?) | int | The maximum connection number allocated by each query of each physical database, default value: 1 |
check.table.metadata.enabled (?) | boolean | Check meta-data consistency or not in initialization, default value: false |
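A sketch of how such properties might be read, using the keys and defaults from the table above with plain `java.util.Properties`. The helper class is hypothetical, not ShardingSphere's own properties holder.

```java
import java.util.Properties;

// Hypothetical helper that reads the configuration keys above with their documented defaults.
public final class PropsSketch {

    private final Properties props;

    public PropsSketch(Properties props) {
        this.props = props;
    }

    public boolean isSqlShow() {
        return Boolean.parseBoolean(props.getProperty("sql.show", "false"));
    }

    public int getExecutorSize() {
        return Integer.parseInt(props.getProperty("executor.size", "0"));
    }

    public int getMaxConnectionsSizePerQuery() {
        return Integer.parseInt(props.getProperty("max.connections.size.per.query", "1"));
    }

    public boolean isCheckTableMetadataEnabled() {
        return Boolean.parseBoolean(props.getProperty("check.table.metadata.enabled", "false"));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("sql.show", "true");
        PropsSketch sketch = new PropsSketch(props);
        System.out.println(sketch.isSqlShow());                     // true
        System.out.println(sketch.getMaxConnectionsSizePerQuery()); // 1 (default)
    }
}
```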
Name | DataType | Description |
---|---|---|
dataSourceMap | Map<String, DataSource> | Map of data sources and their names |
masterSlaveRuleConfig | MasterSlaveRuleConfiguration | Master slave rule configuration |
configMap (?) | Map<String, Object> | Config map |
props (?) | Properties | Properties |
Name | DataType | Description |
---|---|---|
name | String | Name of master slave data source |
masterDataSourceName | String | Name of master data source |
slaveDataSourceNames | Collection<String> | Names of slave data sources |
loadBalanceAlgorithm (?) | MasterSlaveLoadBalanceAlgorithm | Load balance algorithm |
User-defined arguments.
Enumeration of properties.
Name | DataType | Description |
---|---|---|
sql.show (?) | boolean | Whether to print SQL statements, default value: false |
executor.size (?) | int | The number of working threads, default value: CPU count |
max.connections.size.per.query (?) | int | Max connection size for every query to every actual database. default value: 1 |
check.table.metadata.enabled (?) | boolean | Check the metadata consistency of all the tables, default value: false |
In order to relieve pressure on the database, read and write operations are separated onto different data sources: the write database is called the master, the read database is called the slave, and one master can be configured with multiple slaves.
// Construct a readwrite-splitting data source; it implements the DataSource interface and can be used directly as one.
// masterDataSource, slaveDataSource0, slaveDataSource1, etc. are real data sources configured with a connection pool such as DBCP.
Map<String, DataSource> dataSourceMap = new HashMap<>();
dataSourceMap.put("masterDataSource", masterDataSource);
dataSourceMap.put("slaveDataSource0", slaveDataSource0);
dataSourceMap.put("slaveDataSource1", slaveDataSource1);
// Constructing readwrite-splitting configuration
MasterSlaveRuleConfiguration masterSlaveRuleConfig = new MasterSlaveRuleConfiguration();
masterSlaveRuleConfig.setName("ms_ds");
masterSlaveRuleConfig.setMasterDataSourceName("masterDataSource");
masterSlaveRuleConfig.getSlaveDataSourceNames().add("slaveDataSource0");
masterSlaveRuleConfig.getSlaveDataSourceNames().add("slaveDataSource1");
DataSource dataSource = MasterSlaveDataSourceFactory.createDataSource(dataSourceMap, masterSlaveRuleConfig);
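The routing idea behind such a readwrite-splitting data source can be sketched as follows: writes go to the master, reads are balanced across the slaves. This is a stand-alone illustration with a hypothetical class name, not the actual router.

```java
import java.util.List;

// Illustrative router: SELECT statements go to a slave, everything else to the master.
public final class MasterSlaveRouterSketch {

    private final String masterName;
    private final List<String> slaveNames;
    private int nextSlave = 0;

    public MasterSlaveRouterSketch(String masterName, List<String> slaveNames) {
        this.masterName = masterName;
        this.slaveNames = slaveNames;
    }

    // Route a statement to a data source name by its SQL type.
    public String route(String sql) {
        if (sql.trim().toUpperCase().startsWith("SELECT")) {
            String slave = slaveNames.get(nextSlave);
            nextSlave = (nextSlave + 1) % slaveNames.size();
            return slave;
        }
        return masterName;
    }

    public static void main(String[] args) {
        MasterSlaveRouterSketch router = new MasterSlaveRouterSketch(
                "masterDataSource", List.of("slaveDataSource0", "slaveDataSource1"));
        System.out.println(router.route("SELECT * FROM t_order"));      // slaveDataSource0
        System.out.println(router.route("SELECT * FROM t_order"));      // slaveDataSource1
        System.out.println(router.route("INSERT INTO t_order VALUES (1)")); // masterDataSource
    }
}
```

Real routing is more involved (transactions, hints, and forced-master reads all pin statements to the master), but the read/write split above is the core idea.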
// Construct the real data sources for two readwrite-splitting groups.
// masterDataSource0, slaveDataSource00, slaveDataSource01, etc. are real data sources configured with a connection pool such as DBCP.
Map<String, DataSource> dataSourceMap = new HashMap<>();
dataSourceMap.put("masterDataSource0", masterDataSource0);
dataSourceMap.put("slaveDataSource00", slaveDataSource00);
dataSourceMap.put("slaveDataSource01", slaveDataSource01);
dataSourceMap.put("masterDataSource1", masterDataSource1);
dataSourceMap.put("slaveDataSource10", slaveDataSource10);
dataSourceMap.put("slaveDataSource11", slaveDataSource11);
// Constructing readwrite-splitting configuration
MasterSlaveRuleConfiguration masterSlaveRuleConfig0 = new MasterSlaveRuleConfiguration();
masterSlaveRuleConfig0.setName("ds_0");
masterSlaveRuleConfig0.setMasterDataSourceName("masterDataSource0");
masterSlaveRuleConfig0.getSlaveDataSourceNames().add("slaveDataSource00");
masterSlaveRuleConfig0.getSlaveDataSourceNames().add("slaveDataSource01");
MasterSlaveRuleConfiguration masterSlaveRuleConfig1 = new MasterSlaveRuleConfiguration();
masterSlaveRuleConfig1.setName("ds_1");
masterSlaveRuleConfig1.setMasterDataSourceName("masterDataSource1");
masterSlaveRuleConfig1.getSlaveDataSourceNames().add("slaveDataSource10");
masterSlaveRuleConfig1.getSlaveDataSourceNames().add("slaveDataSource11");
// Continue to create a ShardingDataSource through ShardingDataSourceFactory
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getMasterSlaveRuleConfigs().add(masterSlaveRuleConfig0);
shardingRuleConfig.getMasterSlaveRuleConfigs().add(masterSlaveRuleConfig1);
DataSource dataSource = ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig);
// Construct the real slave data sources for each readwrite-splitting group.
// masterDataSource0, slaveDataSource00, slaveDataSource01, etc. are real data sources configured with a connection pool such as DBCP.
Map<String, DataSource> slaveDataSourceMap0 = new HashMap<>();
slaveDataSourceMap0.put("slaveDataSource00", slaveDataSource00);
slaveDataSourceMap0.put("slaveDataSource01", slaveDataSource01);
// You can choose the master-slave library load balancing strategy, the default is ROUND_ROBIN, and there is RANDOM to choose from, or customize the load strategy
DataSource masterSlaveDs0 = MasterSlaveDataSourceFactory.createDataSource("ms_0", "masterDataSource0", masterDataSource0, slaveDataSourceMap0, MasterSlaveLoadBalanceStrategyType.ROUND_ROBIN);
Map<String, DataSource> slaveDataSourceMap1 = new HashMap<>();
slaveDataSourceMap1.put("slaveDataSource10", slaveDataSource10);
slaveDataSourceMap1.put("slaveDataSource11", slaveDataSource11);
DataSource masterSlaveDs1 = MasterSlaveDataSourceFactory.createDataSource("ms_1", "masterDataSource1", masterDataSource1, slaveDataSourceMap1, MasterSlaveLoadBalanceStrategyType.ROUND_ROBIN);
// Constructing readwrite-splitting configuration
Map<String, DataSource> dataSourceMap = new HashMap<>();
dataSourceMap.put("ms_0", masterSlaveDs0);
dataSourceMap.put("ms_1", masterSlaveDs1);
// Continue to create a ShardingDataSource through ShardingDataSourceFactory
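The RANDOM strategy mentioned in the load-balancing comment above can be sketched similarly. As before, this is an illustrative stand-alone class, not the library's implementation.

```java
import java.util.List;
import java.util.Random;

// Illustrative random load balancing over slave data source names.
public final class RandomLoadBalanceSketch {

    private final Random random = new Random();

    // Pick any configured slave with equal probability.
    public String getDataSource(List<String> slaveNames) {
        return slaveNames.get(random.nextInt(slaveNames.size()));
    }

    public static void main(String[] args) {
        RandomLoadBalanceSketch lb = new RandomLoadBalanceSketch();
        List<String> slaves = List.of("slaveDataSource10", "slaveDataSource11");
        // The chosen name is always one of the configured slaves.
        System.out.println(slaves.contains(lb.getDataSource(slaves))); // true
    }
}
```

Random selection spreads load evenly in expectation without any shared counter, which makes it trivially thread-safe, at the cost of less predictable short-term distribution than round-robin.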