I recently had to sit down and think about how I would spread workloads across nodes in a RAC data warehouse configuration. The challenge was quite interesting; here were the specifications:
• Being able to fire PQ slaves on different instances in order to use all available resources for most of the aggregation queries.
• Based on a tnsnames connection, being able to restrict PQ access to one node, because external tables may be used and slaves running on the wrong node would fail:
insert /*+ append parallel(my_table,4) */ into my_table
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001, instance node-db04:INST2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile408.12.myfile_P22.20070907.0.txt in MYDB_INBOUND_DIR not found
ORA-06512: at "SYS.ORACLE_LOADER", line 19
I could come up with two basic configurations based on combinations of services, INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP. This post is about an asymmetric configuration; the second will deal with a symmetric configuration.
Parallel executions are service-aware: the slaves inherit the database service from the coordinator. But the coordinator may execute slaves on any instance of the cluster, which means that service localization (a 'PREFERRED' instance) will create the coordinator on the designated instance while the slaves are not bound by the service configuration. So I rewrote "being able to restrict PQ access to one node" into "being able to restrict both PQ access and the coordinator to one node".
An instance belongs to the instance groups it declares in its init.ora/spfile. Each instance may declare several instance groups: INSTANCE_GROUPS='IG1','IG2' (be careful not to set 'IG1,IG2', which would declare a single group). PARALLEL_INSTANCE_GROUP is another parameter, which can be configured either at the system or at the session level. A session may fire PQ slaves on any instance whose instance groups include the session's parallel instance group.
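For reference, the parameters above could presumably be set like this from SQL*Plus (a sketch only, not from the original post; the instance names I1/I2 follow the example below, and INSTANCE_GROUPS is a static parameter, hence scope=spfile):

```sql
-- Sketch: declare the groups each instance belongs to.
-- Note the separate quoted values: 'IG1','IG2' declares two groups,
-- while 'IG1,IG2' would declare a single group named "IG1,IG2".
alter system set instance_groups='IG1','IG2' scope=spfile sid='I1';
alter system set instance_groups='IG2' scope=spfile sid='I2';
-- PARALLEL_INSTANCE_GROUP is dynamic and may be set system- or session-wide:
alter system set parallel_instance_group='IG1' sid='I1';
alter session set parallel_instance_group='IG2';
```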
Let's consider a two-node configuration: node1 and node2, with instances I1 and I2 running on node1 and node2 respectively.
The Oracle data warehousing documentation states that a SELECT statement on schema objects created with a PARALLEL declaration can be parallelized only if the query involves either a full table scan or an index range scan spanning multiple partitions. I'll force a full table scan in my test case to have the CBO decide to pick a parallel plan:
select /*+ full(orders_part)*/ count(*) from orders_part;
The configuration described below will allow users:
• connected to node1 to execute both the coordinator and the slaves on node1, preventing the slaves from spilling onto node2
• connected to node2 but unable to issue an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
• connected to node1 and able to issue an alter session to load balance.
In a nutshell, users connecting to node1 will restrict their slave scope to node1, while users connecting to node2 will be allowed to load balance their slaves over all the nodes.
For the test I set parallel_min_servers=0, so that I can see the slaves starting whenever Oracle decides to fire them.
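With parallel_min_servers=0 the PX servers only exist while parallel queries run, so their startup can be watched directly; a sketch:

```sql
-- Sketch: list the PX servers currently alive on each instance
select inst_id, server_name, status
from gv$px_process
order by inst_id, server_name;
```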
Asymmetric INSTANCE_GROUPS configuration:
On node1, spfileI1.ora looks like:
INSTANCE_GROUPS='IG1','IG2'
PARALLEL_INSTANCE_GROUP='IG1'
On node2, spfileI2.ora contains:
INSTANCE_GROUPS='IG2'
PARALLEL_INSTANCE_GROUP='IG2'
select inst_id, name, value from gv$parameter where name like '%instance%group%';
INST_ID NAME VALUE
———- ——————————————————————————-
1 instance_groups IG1, IG2
1 parallel_instance_group IG1
2 instance_groups IG2
2 parallel_instance_group IG2
Single node access
Using dbca I configured nodetaf1, a service for which node1/instance I1 was the preferred instance and node2 was set to "Not used" (I do not want to connect on one node and execute on another, thereby unnecessarily clogging the interconnect), and did the same for node2/instance I2 with nodetaf2.
SQL> select inst_id,name from gv$active_services where name like '%taf%';
INST_ID NAME
———- —————————————————————-
2 nodetaf2
1 nodetaf1
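The same node-bound services could also be created from the command line with srvctl instead of dbca; a sketch, assuming the database is named EMDWH (an assumption inferred from the EMDWH1 SID below, so adjust to your environment):

```shell
# Sketch: nodetaf1 prefers instance I1 with no available (failover) instance,
# so it never relocates to node2; nodetaf2 is the mirror image.
srvctl add service -d EMDWH -s nodetaf1 -r I1
srvctl add service -d EMDWH -s nodetaf2 -r I2
srvctl start service -d EMDWH -s nodetaf1
srvctl start service -d EMDWH -s nodetaf2
```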
The related tnsnames.ora entry for node1-only access looks like:
node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
Note that the host name is not the VIP address (because there is no point in switching nodes should node1 fail).
Test results:
$ sqlplus mylogin/mypass@node1-only
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
select qcsid, p.inst_id "Inst", p.server_group "Group", p.server_set "Set", s.program
from gv$px_session p, gv$session s
where s.sid = p.sid and qcsid = &1
order by qcinst_id, p.inst_id, server_group, server_set;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
938 1 1 1 oracle@node1 (P000)
938 1 1 1 oracle@node1 (P004)
938 1 1 1 oracle@node1 (P002)
938 1 1 1 oracle@node1 (P005)
938 1 1 1 oracle@node1 (P006)
938 1 1 1 oracle@node1 (P001)
938 1 1 1 oracle@node1 (P007)
938 1 1 1 oracle@node1 (P003)
938 1 sqlplus@node1 (TNS V1-V3)
The coordinator and the slaves stay on node1.
Dual node access
I then added a service aimed at firing slave executions on both nodes. The 'bothnodestaf' service was added using dbca and then modified to give it the "goal_throughput" and "clb_goal_short" load balancing advisories: according to the documentation, load balancing is then based on the rate at which work is completed in the service, plus the available bandwidth. I'll dig into that one day to get a better understanding of the available LB strategies.
execute dbms_service.modify_service (service_name => 'bothnodestaf' -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
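To check that the modification took, the service attributes can be read back from dba_services; a sketch:

```sql
-- Sketch: verify the load balancing goals on the modified service
select name, goal, clb_goal
from dba_services
where name = 'bothnodestaf';
```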
In the tnsnames.ora:
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
Note that LOAD_BALANCE is left unset (it then defaults to OFF) in the tnsnames.ora, to allow server-side connection balancing.
On node1:
$ sqlplus mylogin/mypass@bothnodes
alter session set parallel_instance_group='IG2';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
936 1 1 1 oracle@node1 (P002)
936 1 1 1 oracle@node1 (P003)
936 1 1 1 oracle@node1 (P001)
936 1 1 1 oracle@node1 (P000)
936 1 1 1 oracle@node2 (P037)
936 1 1 1 oracle@node2 (P027)
936 2 1 1 oracle@node1 (O001)
936 2 1 1 oracle@node2 (P016)
936 2 1 1 oracle@node2 (P019)
936 2 1 1 oracle@node2 (P017)
936 2 1 1 oracle@node2 (P018)
936 1 sqlplus@node1 (TNS V1-V3)
The slaves started on both nodes.
On node2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ——————————————————————————-
952 1 1 1 oracle@node1 (P002)
952 1 1 1 oracle@node1 (P003)
952 1 1 1 oracle@node1 (P001)
952 1 1 1 oracle@node1 (P000)
952 1 1 1 oracle@node2 (P037)
952 1 1 1 oracle@node2 (P027)
952 2 1 1 oracle@node1 (O001)
952 2 1 1 oracle@node2 (P016)
952 2 1 1 oracle@node2 (P019)
952 2 1 1 oracle@node2 (P017)
952 2 1 1 oracle@node2 (P018)
952 1 sqlplus@node1 (TNS V1-V3)
Again, connecting through the node2 service allows load balancing between the nodes.
If you have not done so, it would be beneficial to first go through the first post, "Strategies for parallelized queries across RAC instances", to get an understanding of the concepts and the challenge. There I described an asymmetric strategy for handling RAC inter-instance parallelized queries while still being able to force both the slaves and the coordinator to reside on just one node. The asymmetric configuration is a connection- and service-based scheme which allows users:
• connected to node1 to execute both the coordinator and the slaves on node1, preventing the slaves from spilling onto node2
• connected to node2 but unable to issue an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
• connected to node1 and able to issue an alter session to load balance.
Another way of doing things is to load balance the default service to which the applications connect, and to separately restrict access to node1 and to node2. This is a symmetric configuration.
Load balancing:
The tnsnames entries and service names are the same as in the asymmetric configuration.
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
execute dbms_service.modify_service (service_name => 'bothnodestaf' -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
Both spfiles were changed:
On node1, spfileI1.ora contains:
INSTANCE_GROUPS='IG1','IGCLUSTER'
PARALLEL_INSTANCE_GROUP='IGCLUSTER'
On node2, spfileI2.ora contains:
INSTANCE_GROUPS='IG2','IGCLUSTER'
PARALLEL_INSTANCE_GROUP='IGCLUSTER'
select inst_id, name, value from gv$parameter where name like '%instance%group%';
INST_ID NAME VALUE
———- ——————————————————————————-
1 instance_groups IG1, IGCLUSTER
1 parallel_instance_group IGCLUSTER
2 instance_groups IG2, IGCLUSTER
2 parallel_instance_group IGCLUSTER
By default, both nodes behave in the same way: as the PARALLEL_INSTANCE_GROUP matches one of the INSTANCE_GROUPS on both nodes, load balancing works by default whichever node the application connects to.
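Since slave placement is driven entirely by the session's effective parallel_instance_group, a quick sanity check can be run from a session on either node; a sketch:

```sql
-- Sketch: both instances should report IGCLUSTER
-- unless a session-level override was issued
select value
from v$parameter
where name = 'parallel_instance_group';
```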
On node 1:
$sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
1050 1 1 1 oracle@node1 (P010)
1050 1 1 1 oracle@node1 (P011)
1050 1 1 1 oracle@node1 (P009)
1050 1 1 1 oracle@node1 (P008)
1050 2 1 1 oracle@node2 (P009)
1050 2 1 1 oracle@node2 (P011)
1050 2 1 1 oracle@node2 (P010)
1050 2 1 1 oracle@node2 (P008)
1050 1 sqlplus@node1 (TNS V1-V3)
On node 2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
997 1 1 1 oracle@node1 (P010)
997 1 1 1 oracle@node1 (P011)
997 1 1 1 oracle@node1 (P009)
997 1 1 1 oracle@node1 (P008)
997 2 1 1 oracle@node2 (P009)
997 2 1 1 oracle@node2 (P011)
997 2 1 1 oracle@node2 (P010)
997 2 1 1 oracle@node2 (P008)
997 2 sqlplus@node2 (TNS V1-V3)
You may notice a subtle difference: this time the coordinator runs, as expected, on the node the session connects to.
Node restriction:
The tnsnames.ora and the service definitions are left unchanged from the tests performed in the previous post: nodetaf1 is set to node1='PREFERRED' and node2='NONE'.
On node1:
Tnsnames.ora: (unchanged)
node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
$ sqlplus mylogin/mypass@node1-only
alter session set parallel_instance_group='IG1';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
984 1 1 1 oracle@node1 (P000)
984 1 1 1 oracle@node1 (P004)
984 1 1 1 oracle@node1 (P002)
984 1 1 1 oracle@node1 (P005)
984 1 1 1 oracle@node1 (P006)
984 1 1 1 oracle@node1 (P001)
984 1 1 1 oracle@node1 (P007)
984 1 1 1 oracle@node1 (P003)
984 1 sqlplus@node1 (TNS V1-V3)
On node 2:
The tnsnames.ora and service definition are symmetric to those on node1.
$ sqlplus mylogin/mypass@node2-only
alter session set parallel_instance_group='IG2';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
952 1 1 1 oracle@node2 (P000)
952 1 1 1 oracle@node2 (P004)
952 1 1 1 oracle@node2 (P002)
952 1 1 1 oracle@node2 (P005)
952 1 1 1 oracle@node2 (P006)
952 1 1 1 oracle@node2 (P001)
952 1 1 1 oracle@node2 (P007)
952 1 1 1 oracle@node2 (P003)
952 1 sqlplus@node2 (TNS V1-V3)
September 12, 2007
Strategies for RAC inter-instance parallelized queries (part 1/2)
Filed under: Oracle,RAC — christianbilien @ 8:32 pm
I recently had to sit down and think about how I would spread workloads across nodes in a RAC data warehouse configuration. The challenge was quite interesting, here were the specifications:
<!--[if !supportLists]--> <!--[endif]-->
•Being able to fire PQ slaves on different instances in order to use all available resources for most of the aggregation queries.
•Based on a tnsname connection, being able to restrict PQ access to one node because external tables may be used. Slaves running on the wrong node would fail :
insert /*+ append parallel(my_table,4) */ into my_table
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001, instance node-db04:INST2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile408.12.myfile_P22.20070907.0.txt in MYDB_INBOUND_DIR not found
ORA-06512: at “SYS.ORACLE_LOADER”, line 19
I could come up with two basic configurations based on combinations of services, INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP. This post is about an asymmetric configuration, the second will deal with a symmetric configuration.
Parallel executions are services aware: the slaves inherit the data base service from the coordinator. But the coordinator may execute slaves on any instance of the cluster, which means that the service localization (‘PREFERRED’ instance) will create the coordinator on the henceforth designed instance, but the slaves are not bound by the service configuration. So I rewrote “being able to restrict PQ access to one node” into “being able to restrict PQ access and the coordinator to one node”
An instance belongs to the instance group it declares in its init.ora/spfile.ora. Each instance may have several instance groups declared: INSTANCE_GROUPS =’IG1’,’IG2’ (be careful not to set ‘IG1,IG2’). PARALLEL_INSTANCE_GROUP is another parameter which can be either configured at the system or at session level. Sessions may fire PQ slaves on instances for which the parallel instance group matches and instance group the instance belongs to.
Let’s consider a 2 nodes configuration: node1 and node2. I1 and I2 are instances respectively running on node1 and node2.
The Oracle Data Warehouse documentation guide states that a SELECT statement can be parallelized for objects schema created with a PARALLEL declaration only if the query involves either a full table table scan or an inter partition index range scan. I’ll force a full table scan in my tests case to have the CBO decide to pick up a parallel plan:
select /*+ full(orders_part)*/ count(*) from orders_part;
The configuration described below will allow users :
•<!--[if !supportLists]--> <!--[endif]-->connected to node1 to execute both the coordinator and slaves on node1, and prevent the slaves to spill on node2
•<!--[if !supportLists]--> <!--[endif]-->connected to node2 but unable to do an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
•<!--[if !supportLists]--> <!--[endif]-->connected to node 1 and able to issue an alter session to load balance.
In a nutshell, users connecting to node1 will restrict their slave scope to node1, users connecting to node2 will be allowed to load balance their slaves over all the nodes.
I set for the test parallel_min_servers=0: I can then see the slaves starting whenever Oracle decides to fire them.
Asymmetric INSTANCE_GROUPS configuration:
On node1, spfileI1.ora looks like:
INSTANCE_GROUPS =’IG1’,’IG2’
PARALLEL_INSTANCE_GROUP=’IG1’
On node2, spfileI2.ora contains:
INSTANCE_GROUPS =’IG2’
PARALLEL_INSTANCE_GROUP=’IG2’
select inst_id, name, value from gv$parameter where name like ‘%instance%group%’;
INST_ID NAME VALUE
———- ——————————————————————————-
1 instance_groups IG1, IG2
1 parallel_instance_group IG1
2 instance_groups IG2
2 parallel_instance_group IG2
Single node access
I configured via dbca nodetaf1, a service for which node1/instance I1 was the preferred node and node was set to “not use” (I do not want to connect on one node an execute on another – thereby unnecessarily clogging the interconnect —) and did the same for node2/instance I2.
SQL> select inst_id,name from gv$active_services where name like ‘%taf%’;
INST_ID NAME
———- —————————————————————-
2 nodetaf2
1 nodetaf1
The related TNSNAMES entries for a node1 only access looks like:
Node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
Note that the host name is not the vip address (because there is no point in switching node should node1 fail).
Test results:
$ sqlplus mylogin/mypass@node1-only
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
select qcsid, p.inst_id “Inst”, p.server_group “Group”, p.server_set “Set”,s.program
from gv$px_session p,gv$session s
where s.sid=p.sid and qcsid=&1
order by qcinst_id , p.inst_id,server_group,server_set
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
938 1 1 1 oracle@node1 (P000)
938 1 1 1 oracle@node1 (P004)
938 1 1 1 oracle@node1 (P002)
938 1 1 1 oracle@node1 (P005)
938 1 1 1 oracle@node1 (P006)
938 1 1 1 oracle@node1 (P001)
938 1 1 1 oracle@node1 (P007)
938 1 1 1 oracle@node1 (P003)
938 1 sqlplus@node1 (TNS V1-V3)
The coordinator and the slaves stay on node1
Dual node access
I then added a service aimed at firing slave executions on both nodes. The ‘bothnodestaf’ was added using dbca and then modified to give it “goal_throughput” and “clb_goal_short” load balancing advisories: according to the documentation, load balancing is based on rate that works is completed in service plus available bandwidth. I’ll dig into that one day to get a better understanding of the LB available strategies.
execute dbms_service.modify_service (service_name => ‘bothnodestaf’ -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
In the tnsnames.ora:
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
Note that LOAD_BALANCE is unset (set to OFF) in the tnsnames.ora to allow server-side connection balancing
On node1:
$ sqlplus mylogin/mypass@bothnodes
alter session set parallel_instance_group=’IG2′;
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
936 1 1 1 oracle@node1 (P002)
936 1 1 1 oracle@node1 (P003)
936 1 1 1 oracle@node1 (P001)
936 1 1 1 oracle@node1 (P000)
936 1 1 1 oracle@node2 (P037)
936 1 1 1 oracle@node2 (P027)
936 2 1 1 oracle@node1 (O001)
936 2 1 1 oracle@node2 (P016)
936 2 1 1 oracle@node2 (P019)
936 2 1 1 oracle@node2 (P017)
936 2 1 1 oracle@node2 (P018)
936 1 sqlplus@node1 (TNS V1-V3)
The slaves started on both nodes.
On node2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ——————————————————————————-
952 1 1 1 oracle@node1 (P002)
952 1 1 1 oracle@node1 (P003)
952 1 1 1 oracle@node1 (P001)
952 1 1 1 oracle@node1 (P000)
952 1 1 1 oracle@node2 (P037)
952 1 1 1 oracle@node2 (P027)
952 2 1 1 oracle@node1 (O001)
952 2 1 1 oracle@node2 (P016)
952 2 1 1 oracle@node2 (P019)
952 2 1 1 oracle@node2 (P017)
952 2 1 1 oracle@node2 (P018)
952 1 sqlplus@node1 (TNS V1-V3)
Again, connecting on a node2 services allows load balancing between the nodes
<!--[if !supportLists]--> <!--[endif]-->
•Being able to fire PQ slaves on different instances in order to use all available resources for most of the aggregation queries.
•Based on a tnsname connection, being able to restrict PQ access to one node because external tables may be used. Slaves running on the wrong node would fail :
insert /*+ append parallel(my_table,4) */ into my_table
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001, instance node-db04:INST2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile408.12.myfile_P22.20070907.0.txt in MYDB_INBOUND_DIR not found
ORA-06512: at “SYS.ORACLE_LOADER”, line 19
I could come up with two basic configurations based on combinations of services, INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP. This post is about an asymmetric configuration, the second will deal with a symmetric configuration.
Parallel executions are services aware: the slaves inherit the data base service from the coordinator. But the coordinator may execute slaves on any instance of the cluster, which means that the service localization (‘PREFERRED’ instance) will create the coordinator on the henceforth designed instance, but the slaves are not bound by the service configuration. So I rewrote “being able to restrict PQ access to one node” into “being able to restrict PQ access and the coordinator to one node”
An instance belongs to the instance group it declares in its init.ora/spfile.ora. Each instance may have several instance groups declared: INSTANCE_GROUPS =’IG1’,’IG2’ (be careful not to set ‘IG1,IG2’). PARALLEL_INSTANCE_GROUP is another parameter which can be either configured at the system or at session level. Sessions may fire PQ slaves on instances for which the parallel instance group matches and instance group the instance belongs to.
Let’s consider a 2 nodes configuration: node1 and node2. I1 and I2 are instances respectively running on node1 and node2.
The Oracle Data Warehouse documentation guide states that a SELECT statement can be parallelized for objects schema created with a PARALLEL declaration only if the query involves either a full table table scan or an inter partition index range scan. I’ll force a full table scan in my tests case to have the CBO decide to pick up a parallel plan:
select /*+ full(orders_part)*/ count(*) from orders_part;
The configuration described below will allow users :
•<!--[if !supportLists]--> <!--[endif]-->connected to node1 to execute both the coordinator and slaves on node1, and prevent the slaves to spill on node2
•<!--[if !supportLists]--> <!--[endif]-->connected to node2 but unable to do an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
•<!--[if !supportLists]--> <!--[endif]-->connected to node 1 and able to issue an alter session to load balance.
In a nutshell, users connecting to node1 will restrict their slave scope to node1, users connecting to node2 will be allowed to load balance their slaves over all the nodes.
I set for the test parallel_min_servers=0: I can then see the slaves starting whenever Oracle decides to fire them.
Asymmetric INSTANCE_GROUPS configuration:
On node1, spfileI1.ora looks like:
INSTANCE_GROUPS =’IG1’,’IG2’
PARALLEL_INSTANCE_GROUP=’IG1’
On node2, spfileI2.ora contains:
INSTANCE_GROUPS =’IG2’
PARALLEL_INSTANCE_GROUP=’IG2’
select inst_id, name, value from gv$parameter where name like ‘%instance%group%’;
INST_ID NAME VALUE
———- ——————————————————————————-
1 instance_groups IG1, IG2
1 parallel_instance_group IG1
2 instance_groups IG2
2 parallel_instance_group IG2
Single node access
I configured via dbca nodetaf1, a service for which node1/instance I1 was the preferred node and node was set to “not use” (I do not want to connect on one node an execute on another – thereby unnecessarily clogging the interconnect —) and did the same for node2/instance I2.
SQL> select inst_id,name from gv$active_services where name like ‘%taf%’;
INST_ID NAME
———- —————————————————————-
2 nodetaf2
1 nodetaf1
The related TNSNAMES entries for a node1 only access looks like:
Node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
Note that the host name is not the vip address (because there is no point in switching node should node1 fail).
Test results:
$ sqlplus mylogin/mypass@node1-only
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
select qcsid, p.inst_id “Inst”, p.server_group “Group”, p.server_set “Set”,s.program
from gv$px_session p,gv$session s
where s.sid=p.sid and qcsid=&1
order by qcinst_id , p.inst_id,server_group,server_set
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
938 1 1 1 oracle@node1 (P000)
938 1 1 1 oracle@node1 (P004)
938 1 1 1 oracle@node1 (P002)
938 1 1 1 oracle@node1 (P005)
938 1 1 1 oracle@node1 (P006)
938 1 1 1 oracle@node1 (P001)
938 1 1 1 oracle@node1 (P007)
938 1 1 1 oracle@node1 (P003)
938 1 sqlplus@node1 (TNS V1-V3)
The coordinator and the slaves stay on node1
Dual node access
I then added a service aimed at firing slave executions on both nodes. The ‘bothnodestaf’ was added using dbca and then modified to give it “goal_throughput” and “clb_goal_short” load balancing advisories: according to the documentation, load balancing is based on rate that works is completed in service plus available bandwidth. I’ll dig into that one day to get a better understanding of the LB available strategies.
execute dbms_service.modify_service (service_name => ‘bothnodestaf’ -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
In the tnsnames.ora:
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
Note that LOAD_BALANCE is unset (set to OFF) in the tnsnames.ora to allow server-side connection balancing
On node1:
$ sqlplus mylogin/mypass@bothnodes
alter session set parallel_instance_group=’IG2′;
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
936 1 1 1 oracle@node1 (P002)
936 1 1 1 oracle@node1 (P003)
936 1 1 1 oracle@node1 (P001)
936 1 1 1 oracle@node1 (P000)
936 1 1 1 oracle@node2 (P037)
936 1 1 1 oracle@node2 (P027)
936 2 1 1 oracle@node1 (O001)
936 2 1 1 oracle@node2 (P016)
936 2 1 1 oracle@node2 (P019)
936 2 1 1 oracle@node2 (P017)
936 2 1 1 oracle@node2 (P018)
936 1 sqlplus@node1 (TNS V1-V3)
The slaves started on both nodes.
On node2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ——————————————————————————-
952 1 1 1 oracle@node1 (P002)
952 1 1 1 oracle@node1 (P003)
952 1 1 1 oracle@node1 (P001)
952 1 1 1 oracle@node1 (P000)
952 1 1 1 oracle@node2 (P037)
952 1 1 1 oracle@node2 (P027)
952 2 1 1 oracle@node1 (O001)
952 2 1 1 oracle@node2 (P016)
952 2 1 1 oracle@node2 (P019)
952 2 1 1 oracle@node2 (P017)
952 2 1 1 oracle@node2 (P018)
952 1 sqlplus@node1 (TNS V1-V3)
Again, connecting on a node2 services allows load balancing between the nodes
If you have not done so, it would be beneficial to first go through the first post “Strategies for parallelized queries across RAC instances” to get an understanding of the concepts and challenge. I came up with a description of an asymmetric strategy for the handling of RAC inter-instance parallelized queries but still being able to force both the slaves and the coordinator to reside on just one node. The asymmetric configuration is a connexion and service based scheme which allow users
•connected to node1 to execute both the coordinator and slaves on node1, and prevent the slaves to spill on node2
•connected to node2 but unable to do an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
•connected to node 1 and able to issue an alter session to load balance.
Another way of doing things is to load balance the default service to which the applications connect to, and to restrict access to node1 and to node2. This is a symmetric configuration.
Load balancing:
Tnsnames and service names are the same as in the asymmetric configuration.
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
execute dbms_service.modify_service (service_name => ‘bothnodestaf’ -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
Both spfiles were changed:
On node1, spfileI1.ora contains:
INSTANCE_GROUPS =’IG1’,’IGCLUSTER’
PARALLEL_INSTANCE_GROUP=’IGCLUSTER’
On node2, spfileI2.ora contains:
INSTANCE_GROUPS =’IG2’, ’IGCLUSTER’
PARALLEL_INSTANCE_GROUP=’IGCLUSTER’
select inst_id, name, value from gv$parameter where name like ‘%instance%group%’;
INST_ID NAME VALUE
———- ——————————————————————————-
1 instance_groups IG1, IGCLUSTER
1 parallel_instance_group IGCLUSTER
2 instance_groups IG2, IGCLUSTER
2 parallel_instance_group IGCLUSTER
By default, both nodes will behave in the same way. As the PARALLEL_INSTANCE_GROUP matches one of the INSTANCE_GROUPS on both nodes, load balancing will work by default whatever the node on which the applications connects.
On node 1:
$sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
———- ———- ———- ———- ————————————————
1050 1 1 1 oracle@node1 (P010)
1050 1 1 1 oracle@node1 (P011)
1050 1 1 1 oracle@node1 (P009)
1050 1 1 1 oracle@node1 (P008)
1050 2 1 1 oracle@node2 (P009)
1050 2 1 1 oracle@node2 (P011)
1050 2 1 1 oracle@node2 (P010)
1050 2 1 1 oracle@node2 (P008)
1050 1 sqlplus@node1 (TNS V1-V3)
On node 2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
997 1 1 1 oracle@node1 (P010)
997 1 1 1 oracle@node1 (P011)
997 1 1 1 oracle@node1 (P009)
997 1 1 1 oracle@node1 (P008)
997 2 1 1 oracle@node2 (P009)
997 2 1 1 oracle@node2 (P011)
997 2 1 1 oracle@node2 (P010)
997 2 1 1 oracle@node2 (P008)
997 2 sqlplus@node2 (TNS V1-V3)
You may notice a subtle difference: the coordinator runs, as expected, on the node the service connects to.
Node restriction:
The tnsnames.ora and the service definition are left unchanged from the tests performed in the previous post. nodetaf1 is set to node1='PREFERRED' and node2='NONE'.
On node1:
Tnsnames.ora: (unchanged)
node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
$ sqlplus mylogin/mypass@node1-only
alter session set parallel_instance_group='IG1';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
984 1 1 1 oracle@node1 (P000)
984 1 1 1 oracle@node1 (P004)
984 1 1 1 oracle@node1 (P002)
984 1 1 1 oracle@node1 (P005)
984 1 1 1 oracle@node1 (P006)
984 1 1 1 oracle@node1 (P001)
984 1 1 1 oracle@node1 (P007)
984 1 1 1 oracle@node1 (P003)
984 1 sqlplus@node1 (TNS V1-V3)
On node 2:
The tnsnames.ora and service definition mirror those on node1.
$ sqlplus mylogin/mypass@node2-only
alter session set parallel_instance_group='IG2';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
952 1 1 1 oracle@node2 (P000)
952 1 1 1 oracle@node2 (P004)
952 1 1 1 oracle@node2 (P002)
952 1 1 1 oracle@node2 (P005)
952 1 1 1 oracle@node2 (P006)
952 1 1 1 oracle@node2 (P001)
952 1 1 1 oracle@node2 (P007)
952 1 1 1 oracle@node2 (P003)
952 1 sqlplus@node2 (TNS V1-V3)
September 12, 2007
Strategies for RAC inter-instance parallelized queries (part 1/2)
Filed under: Oracle,RAC — christianbilien @ 8:32 pm
I recently had to sit down and think about how I would spread workloads across nodes in a RAC data warehouse configuration. The challenge was quite interesting, here were the specifications:
•Being able to fire PQ slaves on different instances in order to use all available resources for most of the aggregation queries.
•Being able, based on a tnsnames connection, to restrict PQ access to one node because external tables may be used. Slaves running on the wrong node would fail:
insert /*+ append parallel(my_table,4) */ into my_table
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001, instance node-db04:INST2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file myfile408.12.myfile_P22.20070907.0.txt in MYDB_INBOUND_DIR not found
ORA-06512: at "SYS.ORACLE_LOADER", line 19
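The failure is easy to reproduce: an external table reads its files through a directory object, and the directory path is only meaningful on the node that actually holds the files. A minimal sketch (the path, file name and table name below are hypothetical, not the ones from the error above):

```sql
-- Hypothetical setup: /data/inbound exists on node1 only
create directory MYDB_INBOUND_DIR as '/data/inbound';

create table my_ext_table (
  id  number,
  txt varchar2(100)
)
organization external (
  type oracle_loader
  default directory MYDB_INBOUND_DIR
  access parameters (
    records delimited by newline
    fields terminated by ';'
  )
  location ('myfile.txt')
)
reject limit unlimited;
```

A PQ slave spawned on node2 cannot open /data/inbound/myfile.txt there, raises KUP-04040, and the coordinator reports it as ORA-12801.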
I could come up with two basic configurations based on combinations of services, INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP. This post is about an asymmetric configuration; the second will deal with a symmetric configuration.
Parallel executions are service-aware: the slaves inherit the database service from the coordinator. But the coordinator may execute slaves on any instance of the cluster, which means that service localization (a 'PREFERRED' instance) will create the coordinator on the designated instance, but the slaves are not bound by the service configuration. So I rewrote "being able to restrict PQ access to one node" as "being able to restrict PQ access and the coordinator to one node".
An instance belongs to the instance groups it declares in its init.ora/spfile. Each instance may declare several instance groups: INSTANCE_GROUPS='IG1','IG2' (be careful not to set 'IG1,IG2', which declares a single group). PARALLEL_INSTANCE_GROUP is another parameter, which can be configured either at the system or at the session level. A session may fire PQ slaves only on instances whose INSTANCE_GROUPS list contains the session's parallel instance group.
Let's consider a two-node configuration: node1 and node2, with instances I1 and I2 running on node1 and node2 respectively.
The Oracle Data Warehousing Guide states that a SELECT statement can be parallelized for objects created with a PARALLEL declaration only if the query involves either a full table scan or an inter-partition index range scan. I force a full table scan in my test case so that the CBO picks a parallel plan:
select /*+ full(orders_part)*/ count(*) from orders_part;
The configuration described below will allow users :
•connected to node1 to execute both the coordinator and the slaves on node1, preventing the slaves from spilling onto node2
•connected to node2 but unable to issue an alter session (because the code belongs to an external provider) to load balance their queries across the nodes
•connected to node1 and able to issue an alter session to load balance.
In a nutshell, users connecting to node1 will restrict their slave scope to node1, users connecting to node2 will be allowed to load balance their slaves over all the nodes.
For the test I set parallel_min_servers=0: I can then see the slaves start whenever Oracle decides to fire them.
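With parallel_min_servers=0, no slave processes exist until a parallel statement starts, so gv$px_process shows them being spawned. A quick check, assuming nothing else is running in parallel:

```sql
-- No rows are expected before the first parallel query fires;
-- rerun after the query starts to see the Pnnn slaves appear.
select inst_id, server_name, status, pid, spid
from gv$px_process
order by inst_id, server_name;
```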
Asymmetric INSTANCE_GROUPS configuration:
On node1, spfileI1.ora looks like:
INSTANCE_GROUPS='IG1','IG2'
PARALLEL_INSTANCE_GROUP='IG1'
On node2, spfileI2.ora contains:
INSTANCE_GROUPS='IG2'
PARALLEL_INSTANCE_GROUP='IG2'
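INSTANCE_GROUPS is a static parameter, so this asymmetric setup has to go into the spfile with per-instance sid clauses, followed by a restart of both instances. A sketch, assuming the instances are named I1 and I2:

```sql
-- Static: takes effect only after restarting I1 and I2
alter system set instance_groups='IG1','IG2' scope=spfile sid='I1';
alter system set instance_groups='IG2' scope=spfile sid='I2';

-- Dynamic: can be changed at system level (or overridden per session)
alter system set parallel_instance_group='IG1' scope=both sid='I1';
alter system set parallel_instance_group='IG2' scope=both sid='I2';
```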
select inst_id, name, value from gv$parameter where name like '%instance%group%';
INST_ID NAME VALUE
---------- ------------------------- ------------------------------
1 instance_groups IG1, IG2
1 parallel_instance_group IG1
2 instance_groups IG2
2 parallel_instance_group IG2
Single node access
Using dbca I configured nodetaf1, a service for which node1/instance I1 was the preferred node and node2 was set to "not used" (I do not want to connect on one node and execute on another, thereby unnecessarily clogging the interconnect), and did the same for node2/instance I2.
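The same service can be created outside dbca with srvctl. A sketch with a hypothetical database name: -r lists the preferred instances, -a the available (failover) ones, and leaving an instance out of both makes it "not used" for the service.

```shell
# nodetaf1 runs on instance EMDWH1 only; EMDWH2 is neither
# preferred nor available, so the service never runs there
srvctl add service -d EMDWH -s nodetaf1 -r EMDWH1
srvctl start service -d EMDWH -s nodetaf1
```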
SQL> select inst_id,name from gv$active_services where name like '%taf%';
INST_ID NAME
---------- ----------------------------------------------------------------
2 nodetaf2
1 nodetaf1
The related TNSNAMES entry for node1-only access looks like:
Node1-only =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(LOAD_BALANCING=NO)
(CONNECT_DATA =
(SERVICE_NAME= nodetaf1)
(SID = EMDWH1)
(SERVER = DEDICATED)
)
)
Note that the host name is not the VIP address (because there is no point in switching nodes should node1 fail).
Test results:
$ sqlplus mylogin/mypass@node1-only
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
select qcsid, p.inst_id "Inst", p.server_group "Group", p.server_set "Set", s.program
from gv$px_session p, gv$session s
where s.sid = p.sid and qcsid = &1
order by qcinst_id, p.inst_id, server_group, server_set;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
938 1 1 1 oracle@node1 (P000)
938 1 1 1 oracle@node1 (P004)
938 1 1 1 oracle@node1 (P002)
938 1 1 1 oracle@node1 (P005)
938 1 1 1 oracle@node1 (P006)
938 1 1 1 oracle@node1 (P001)
938 1 1 1 oracle@node1 (P007)
938 1 1 1 oracle@node1 (P003)
938 1 sqlplus@node1 (TNS V1-V3)
The coordinator and the slaves stay on node1.
Dual node access
I then added a service, bothnodestaf, aimed at firing slave executions on both nodes. It was created with dbca and then modified to give it the goal_throughput and clb_goal_short load-balancing advisories: according to the documentation, load balancing is then based on the rate at which work is completed in the service, plus the available bandwidth. I'll dig into that one day to get a better understanding of the available LB strategies.
execute dbms_service.modify_service (service_name => 'bothnodestaf' -
, aq_ha_notifications => true -
, goal => dbms_service.goal_throughput -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_short);
In the tnsnames.ora:
bothnodes =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = myvip2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = bothnodestaf)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
(SERVER = DEDICATED)
)
)
Note that LOAD_BALANCE is left unset (i.e. OFF) in the tnsnames.ora to allow server-side connection balancing.
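A quick way to check which instance the server-side balancing actually picked for a given connection (nothing specific to this setup, just the standard view):

```sql
-- Shows the instance and host this session landed on
select instance_name, host_name from v$instance;
```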
On node1:
$ sqlplus mylogin/mypass@bothnodes
alter session set parallel_instance_group='IG2';
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
936 1 1 1 oracle@node1 (P002)
936 1 1 1 oracle@node1 (P003)
936 1 1 1 oracle@node1 (P001)
936 1 1 1 oracle@node1 (P000)
936 1 1 1 oracle@node2 (P037)
936 1 1 1 oracle@node2 (P027)
936 2 1 1 oracle@node1 (O001)
936 2 1 1 oracle@node2 (P016)
936 2 1 1 oracle@node2 (P019)
936 2 1 1 oracle@node2 (P017)
936 2 1 1 oracle@node2 (P018)
936 1 sqlplus@node1 (TNS V1-V3)
The slaves started on both nodes.
On node2:
$ sqlplus mylogin/mypass@bothnodes
< no alter session set parallel_instance_group= >
select /*+ full(orders_part) */ count(*) from orders_part;
QCSID Inst Group Set PROGRAM
---------- ---------- ---------- ---------- ------------------------------------------------
952 1 1 1 oracle@node1 (P002)
952 1 1 1 oracle@node1 (P003)
952 1 1 1 oracle@node1 (P001)
952 1 1 1 oracle@node1 (P000)
952 1 1 1 oracle@node2 (P037)
952 1 1 1 oracle@node2 (P027)
952 2 1 1 oracle@node1 (O001)
952 2 1 1 oracle@node2 (P016)
952 2 1 1 oracle@node2 (P019)
952 2 1 1 oracle@node2 (P017)
952 2 1 1 oracle@node2 (P018)
952 1 sqlplus@node1 (TNS V1-V3)
Again, connecting to the node2 service allows load balancing between the nodes.