1. How can I debug SQL that is not executed correctly in ShardingSphere?

The answer is:

Sharding-Proxy and Sharding-JDBC versions later than 1.5.0 support the sql.show configuration item, which prints the SQL parsing context, the rewritten SQL, and the final routing result to the INFO log. sql.show is OFF by default and needs to be switched ON explicitly.
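
For example, when building a data source through the Java API, the property can be switched on roughly as follows (a minimal sketch; the factory call in the comment is only indicative, since its exact argument list differs between versions):

    // Enable SQL logging via the sql.show property (disabled by default).
    Properties props = new Properties();
    props.setProperty("sql.show", "true");
    // The Properties object is then handed to the data source factory, e.g.
    // ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, props)
    // -- the signature varies by version, so check the API of the release you use.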

2. Why do compilation errors occur when reading the source code?

The answer is:

ShardingSphere uses Lombok to keep the code minimal. For details, please refer to Lombok.

The sharding-orchestration-reg module requires running the mvn install command to generate the gRPC-related Java files from the protobuf definitions.

3. Why can the xsd file not be found when using the Spring namespace?

The answer is:

Deploying the xsd files to a public web address is not required by the Spring namespace usage specification, but some users expect it, so the xsd files are also deployed to the ShardingSphere website. In fact, META-INF/spring.schemas in the sharding-jdbc-spring-namespace jar already configures the location of the xsd files: META-INF/namespace/sharding.xsd and META-INF/namespace/master-slave.xsd.

4. How to solve the error Could not resolve placeholder … in string value …?

The answer is:

An inline expression can be marked with either ${...} or $->{...}, but ${...} conflicts with Spring's own property placeholder, so it is better to use $->{...} in a Spring environment.
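
For example, the same sharding expression written with both identifiers (the table and column names are made up; the conflict arises when such a value appears in a Spring configuration file):

    // Both identifiers mark an inline expression, but only the second is safe in Spring
    // configuration files, where "${...}" would be treated as a Spring property placeholder.
    String conflicting = "t_order_${order_id % 2}";
    String recommended = "t_order_$->{order_id % 2}";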

5. Why does the inline expression return a floating point number as a result?

The answer is:

In Java, dividing two integers yields an integer, but inline expressions use Groovy syntax, where dividing two integers yields a floating point number. To get an integer result, change A/B to A.intdiv(B).
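
For example (illustrative expressions; order_id and the table names are made up):

    // Groovy "/" on two integers may produce a floating point result, which breaks the table name:
    String floating = "t_order_$->{order_id / 2}";        // can evaluate to t_order_0.5
    // intdiv() performs integer division, so the suffix stays an integer:
    String integral = "t_order_$->{order_id.intdiv(2)}";  // evaluates to t_order_0, t_order_1, ...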

6. If only some of the databases or tables are sharded, do the tables that are not sharded need to be configured in the sharding rules?

The answer is:

Yes. ShardingSphere merges multiple data sources into a single logical data source, so it cannot route tables that are not configured in the sharding rules.

However, ShardingSphere provides two ways to simplify this configuration.

Method 1: Configure default-data-source. Any table for which ShardingSphere cannot find a matching sharding data source is then routed to the default data source (see the sketch below).

Method 2: Manage the sharded and non-sharded data sources separately, and use the corresponding data source in the application depending on whether sharding is needed.
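
For Method 1, a minimal sketch using the Java configuration API is shown below (the class and setter names, such as ShardingRuleConfiguration and setDefaultDataSourceName, follow the 3.x style and may differ in other versions; the data source name ds_0 is made up):

    // Any table without a sharding rule is routed to the data source named "ds_0".
    ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
    shardingRuleConfig.setDefaultDataSourceName("ds_0");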

7. Does ShardingSphere support native self-increment primary keys in addition to distributed self-increment primary keys?

The answer is:

Yes. However, there is a restriction: a native self-increment primary key cannot also be used as a sharding column.

Because ShardingSphere does not know the table structure, and a native self-increment primary key does not appear in the original SQL, ShardingSphere cannot recognize that key as a sharding column, so the SQL may be routed to multiple tables. When an INSERT statement is routed to a single table, the native self-increment primary key has a value; when it is routed to more than one table, the returned key is 0.

8. Why is a ClassCastException: Integer can not cast to Long exception thrown when SingleKeyTableShardingAlgorithm is used with the generic type Long?

The answer is:

Make sure the type of the sharding column in the database is consistent with the type declared in the sharding algorithm. For example, if the column type in the database is int(11), the sharding value type should be Integer; if the sharding value type is Long, the column type in the database should be bigint.
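
The following sketch shows the mismatch itself (illustration only, not ShardingSphere code): for an int(11) column the JDBC driver hands back a java.lang.Integer, and casting it to the algorithm's Long generic type fails.

    Object valueFromDatabase = Integer.valueOf(10); // what the driver returns for an int(11) column
    Long shardingValue = (Long) valueFromDatabase;  // throws ClassCastException: Integer cannot be cast to Long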

9. Why is an exception thrown when an aggregation column without an alias is used in SQLServer and PostgreSQL?

The answer is:

SQLServer and PostgreSQL rename aggregation columns that have no alias. Take the following SQL as an example:

    SELECT SUM(num), SUM(num2) FROM tablexxx;

SQLServer returns the column names as an empty string and (2), and PostgreSQL returns them as sum and sum(2), which prevents ShardingSphere from finding the corresponding columns when merging the results.

The correct SQL is:

    SELECT SUM(num) AS sum_num, SUM(num2) AS sum_num2 FROM tablexxx;

10. Why does Oracle throw the exception Order by value must implements Comparable when an ORDER BY clause contains a Timestamp column?

The answer is:

There are two solutions:

1. Configure the JVM parameter -Doracle.jdbc.J2EE13Compliant=true.
2. Call System.getProperties().setProperty("oracle.jdbc.J2EE13Compliant", "true") during project initialization.

The reason is:

com.dangdang.ddframe.rdb.sharding.merger.orderby.OrderByValue#getOrderValues():

    private List<Comparable<?>> getOrderValues() throws SQLException {
        List<Comparable<?>> result = new ArrayList<>(orderByItems.size());
        for (OrderItem each : orderByItems) {
            Object value = resultSet.getObject(each.getIndex());
            // every ORDER BY value must be Comparable so that the merged result sets can be sorted
            Preconditions.checkState(null == value || value instanceof Comparable, "Order by value must implements Comparable");
            result.add((Comparable<?>) value);
        }
        return result;
    }

Because resultSet.getObject(int index) is used, for Timestamp columns Oracle decides whether to return java.sql.Timestamp or oracle.sql.TIMESTAMP according to oracle.jdbc.J2EE13Compliant.

For more detail, please refer to the ojdbc source code of oracle.jdbc.driver.TimestampAccessor#getObject(int var1):

    Object getObject(int var1) throws SQLException {
        Object var2 = null;
        if(this.rowSpaceIndicator == null) {
            DatabaseError.throwSqlException(21);
        }

        if(this.rowSpaceIndicator[this.indicatorIndex + var1] != -1) {
            if(this.externalType != 0) {
                switch(this.externalType) {
                case 93:
                    return this.getTimestamp(var1);
                default:
                    DatabaseError.throwSqlException(4);
                    return null;
                }
            }

            // the oracle.jdbc.J2EE13Compliant setting decides which Timestamp type is returned
            if(this.statement.connection.j2ee13Compliant) {
                var2 = this.getTimestamp(var1);    // java.sql.Timestamp
            } else {
                var2 = this.getTIMESTAMP(var1);    // oracle.sql.TIMESTAMP
            }
        }

        return var2;
    }

11. Why is the result not correct when using Proxool?

The answer is:

When configuring multiple data sources with Proxool, an alias must be set for each of them. When acquiring a connection, Proxool checks whether the alias already exists in the pool; if the alias is missing, connections will be acquired from the same data source.

Key source code of ProxoolDataSource.getConnection():

    if(!ConnectionPoolManager.getInstance().isPoolExists(this.alias)) {
        this.registerPool();
    }

For more details on alias, please refer to the Proxool documentation.
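
A minimal sketch of giving each pool its own alias (the bean property names follow Proxool's ProxoolDataSource and may differ by version; the URLs and alias names are made up):

    // Each data source gets a distinct alias, so Proxool registers a separate pool for it.
    ProxoolDataSource ds0 = new ProxoolDataSource();
    ds0.setAlias("pool_ds_0");
    ds0.setDriverUrl("jdbc:mysql://localhost:3306/ds_0");

    ProxoolDataSource ds1 = new ProxoolDataSource();
    ds1.setAlias("pool_ds_1");
    ds1.setDriverUrl("jdbc:mysql://localhost:3306/ds_1");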

12. Why are the primary keys generated by ShardingSphere's default distributed auto-increment key strategy not continuous, and why do they mostly end in an even digit?

The answer is:

ShardingSphere uses the snowflake algorithm as the default distributed self-increment key strategy, so that unique keys can be computed in a decentralized way. As a result, the generated keys are increasing but not consecutive.

The last four bits of a key computed by the snowflake algorithm represent the incremental value within one millisecond. Therefore, if the application's concurrency within one millisecond is not high, the chance of the last four bits being zero is high.
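
The effect can be illustrated as follows (the bit widths and values here are assumptions for illustration, not taken from the ShardingSphere source):

    // A snowflake-style id appends a per-millisecond sequence counter as its lowest bits.
    long timestampDiff = 123456789L; // milliseconds since the algorithm's epoch (made-up value)
    long workerId = 1L;
    long sequence = 0L;              // rarely advances when concurrency within a millisecond is low
    long id = (timestampDiff << 22) | (workerId << 12) | sequence;
    // With sequence == 0, the low bits of the id are all zero, so the generated key is even.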

The problem of the last digits being mostly even is fixed in version 3.1.0. For details, see: https://github.com/sharding-sphere/sharding-sphere/issues/1617