While investigating a customer’s problem, I found another unexpected error when creating a hash-partitioned table.
IQ server returned an error “Cannot add a partition or subpartition to table bar” against an “alter table… partition by hash…” statement.

[Reproduction]
1. Create a table.
CREATE TABLE bar (
      c1 bigint NOT NULL,
      c2 char(2) NULL,
      c3 date NULL,
      c4 varchar(200) NULL,
      PRIMARY KEY (c1)
) IN DBS01 ;

2. Run the alter table statement.
ALTER TABLE bar PARTITION BY HASH (c2);

3. Error message
I. 01/06 10:18:47. 0001739559 Exception Thrown from dblib/db_alter.cxx:845, Err# 261, tid 6 origtid 6
I. 01/06 10:18:47. 0001739559    O/S Err#: 0, ErrID: 4098 (db_sqlexception); SQLCode: -1013122, SQLState: 'QDD30', Severity: 14
I. 01/06 10:18:47. 0001739559 [22152]: Cannot add a partition or subpartition to table bar

According to the IQ manual, there are some restrictions regarding hash partitioning, as below.
– You can only hash partition a base table. Attempting to partition a global temporary table or a local temporary table raises an error.
– You can only subpartition a hash-partitioned table by range, and only if the table is empty (see the sketch after this list).
– You cannot add, drop, merge, or split a hash partition.
– You cannot add or drop a column from a hash partition key.
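
In other words, the supported path is to declare the hash partitioning (and any range subpartitioning) when the table is created. Below is a hedged sketch of that CREATE-time syntax; the table, dbspace, and boundary values are only illustrative, so verify the exact clause against the IQ 16 reference.

-- Hash-range partitioning declared at CREATE time (sketch only).
CREATE TABLE bar_hr (
    c1 bigint NOT NULL,
    c2 char(2) NULL,
    c3 date NULL,
    PRIMARY KEY (c1)
) IN DBS01
PARTITION BY HASH (c2)
SUBPARTITION BY RANGE (c3) (
    P1 VALUES <= ('2015-12-31') IN DBS01,
    P2 VALUES <= (MAX) IN DBS01
);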

The “alter table” statement I ran does not violate any of the restrictions above, yet it failed.

This problem has been logged as CR776888 and is under investigation by the engineering team.

A workaround to create a hash-partitioned table is:

1) Extract the table data, if needed (steps 1 and 4 are sketched just after this list).
2) Drop the table.
3) Recreate it with the new partition key(s) in the “create table …” statement.
    CREATE TABLE bar (
      c1 bigint NOT NULL,
      c2 char(2) NULL,
      c3 date NULL,
      c4 varchar(200) NULL,
      PRIMARY KEY (c1)
    ) IN DBS01
    PARTITION BY HASH (c2);

4) Load the extracted data, if needed.
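
For steps 1 and 4, here is a hedged sketch using IQ’s temporary extract options and LOAD TABLE. The file path, delimiter, and load clauses are placeholders to adapt (and see the comments below on faster, parallel extracts of very large tables):

-- Step 1: extract through IQ's temp extract facility (path is a placeholder).
SET TEMPORARY OPTION Temp_Extract_Column_Delimiter = '|';
SET TEMPORARY OPTION Temp_Extract_Name1 = '/tmp/bar_data.txt';
SELECT c1, c2, c3, c4 FROM bar;
SET TEMPORARY OPTION Temp_Extract_Name1 = '';   -- turn extraction back off

-- Step 4: reload into the recreated, hash-partitioned table.
LOAD TABLE bar ( c1 '|', c2 '|', c3 '|', c4 '\x0a' )
FROM '/tmp/bar_data.txt'
ESCAPES OFF QUOTES OFF
NOTIFY 1000000;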

Please refer to the IQ manual page below, which explains hash partitioning.
http://help.sap.com/saphelp_iq1608_iqrefso/helpdata/en/a6/136fd584f21015a0afcf632ba5877d/content.htm?frameset=/en/a8/90e66f84f210158e8f869393336af4/frameset.htm&current_toc=/en/a8/90e66f84f210158e8f869393336af4/plain.htm&node_id=32

HTH
Jerry


8 Comments


  1. Rey Wang

    A workaround to make a hash partitioned table is,

    1) Extract the table data, if needed.

    What is the best way to extract a fact table that is 1.5 TB in size, with over 1.0 billion rows?

    If SAP wants customers to use this feature, please think it through carefully, or have a tool ready to speed this up.

    Is there any system proc to figure out which tables are defined with hash partitioning?

    1. Jong-kil Park Post author

      I don’t think there is a system proc to identify partitioned tables.

      Instead, we can execute a query against some system tables as below.

      =====================================================================
      select  table_name,
              case when partition_method=3 and subpartition_method is NULL then 'HASH Partition'
                   when partition_method=3 and subpartition_method=1 then 'HASH-Range Partition'
                   else 'RANGE Partition'
              end partition_type
      from SYSPARTITIONSCHEME ps, systab t
      where ps.partitioned_object_id = t.object_id
      =====================================================================

      (Screenshot: sample output of the query above.)

      Best regards,
      Jerry

      1. Rey Wang

        How about “1) Extract the table data, if needed”?

        We need a speedy way to bcp out binary-mode data and load it back into a hash-partitioned table here.

        Can you comment on that?

        1. Mark Mumy

          A temp extract isn’t fast enough?

          If not, then I would look to parallel temp extracts. I’ve worked on projects where N temp extracts would be run against a table, each using a where clause to filter the data down. Prior to IQ 16, it was quite easy to use the ROWID function, as the numbers were closely grouped. It’s still possible, but just know that the difference between the min and max rowid can be quite large, which can lead to a process that extracts very small sets of data to files.

          I used to use a process that would take the min/max difference and divide it by N where N was the number of streams.  This would give you the number of rows to STEP each extract through. Then simply put the ROWID ranges into each query:

               stream 1: select * from table where ROWID( table ) between MIN and MIN+STEP

               stream 2: select * from table where ROWID( table ) between PREV_MAX+1 and PREV_MAX+STEP

          Just continue increasing the min and max values of the between until your max is at or above the max rowid in the table. A concrete sketch of the first two streams follows.
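
          For illustration, here is a hedged sketch, assuming MIN = 1 and STEP = 500,000,000 (file names and boundary values are placeholders; each stream runs on its own connection and writes through IQ’s temp extract facility):

               -- Stream 1
               SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/bar_part1.txt';
               SELECT * FROM bar WHERE ROWID( bar ) BETWEEN 1 AND 500000000;

               -- Stream 2: picks up at the previous max + 1
               SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/bar_part2.txt';
               SELECT * FROM bar WHERE ROWID( bar ) BETWEEN 500000001 AND 1000000000;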

          Mark

          1. Rey Wang

            For a large data warehouse, can SAP as the vendor provide a utility to resolve this? Parallelism is needed. If you ask customers to work around it with rowid, isn’t that overkill, or just job security for the IQ DBA?

            A temp extract of 1.4 TB of data can take a week as a serialized unload, and over 3-4 TB of disk space might be needed to store the data files. A gzip option should be offered as well.

            1. Mark Mumy

              I would recommend opening a feature request for IQ and HANA DT for this. Yes, it is needed at sites where large amounts of data must be moved around. You may also find it worthwhile to open a feature request for this in Data Services, too.

              Mark

        1. Jong-kil Park Post author

          Hi,

          The information about the partition key columns is stored in the system table SYSPARTITIONKEY.

          If you have created a hash-range partitioned table, the subpartition key information can be seen in another system table, SYSSUBPARTITIONKEY.
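
          For example, here is a sketch (note: I am assuming SYSPARTITIONKEY exposes a partitioned_object_id column the way SYSPARTITIONSCHEME does; verify the column names in your IQ version first):

          select t.table_name, pk.*
          from SYSPARTITIONKEY pk, systab t
          where pk.partitioned_object_id = t.object_id   -- assumed join column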

          Best regards,

          Jerry

