Documentos de Académico
Documentos de Profesional
Documentos de Cultura
winter
2014
OTECH
MAGAZINE
Michael Rosenblum
Challenging Dynamic
SQL Myths
Lonneke Dikmans
Lucas Jellema
Choosing the
right Mobile
Architecture
Foreword
New Years
Resolutions
The holiday season is upon us. This is the season
to start reflecting on the past year and look towards the future. Most people start thinking of
brilliant new years resolutions to make amends
on the faults discover during their reflections.
2014 has been a wonderful year in many aspects.
Globally the world economy has begun to climb
out of recession and technology innovation has
come in a rapid change. As is our field of work.
There are more and more new technologies
added to the stack of tools that are the ITdepartments. But that also means that there
are more and more technologies to choose
from to solve business problems.
As I see it, 2014 is a grand year of change.
Finally, topics like Big Data and The Internet of
Things, have shifted from tech-savvy interest2 OTech Magazine #3 May 2014
want your ad in
otech magazine?
click here to
download the
media kit
Content
dear patrick
Patrick Barel
49
Goran Stankovski
Challenging Dynamic SQL Myths
8
Choosing the right
Michael Rosenblum
Mobile Architecture
How to protect your
55
Lonneke Dikmans
Flashback - Empowering
14
Power Users!
Anar Godjaev
Biju Thomas
Data Warehouses
20
Michelle Kolbe
Lucas Jellema
(Part I)
Management Pack
32
64
73
90
Mahir M Quluzade
Phillip Brown
Time Series Forecasting in SQL
Starting WebLogic
39
Cato Aune
98
dear patrick...
Patrick Barel
Dear Lillian,
Dear Patrick,
What is an ANTI-JOIN?
And what is the difference
between the SEMI-JOIN
and the ANTI-JOIN?
Lillian Sturdey
First of all, both SEMI-JOIN and ANTI-JOIN are not in the SQL syntax but
they are more a pattern. You might expect to be able to write something
like:
[PATRICK]SQL>SELECT d.deptno, d.dname, d.loc
2
FROM dept d
3
SEMI JOIN emp e ON (e.deptno = d.deptno)
4 /
to get the departments with no employees. But all you get is an error
saying your command is not properly ended, which can be read as a
syntax error.
ERROR at line 3:
ORA-00933: ORA-00933 SQL command not properly ended.
Maybe your first idea would be to use a normal join to get all the
departments with at least one employee:
Dear patrick
2/3
Patrick Barel
3
JOIN emp e ON (e.deptno = d.deptno)
4 /
4 /
But this results in a record for every row in the EMP table. And we only
wanted every unique department.
DEPTNO
---------20
30
30
20
30
30
10
20
10
30
20
DNAME
-------------RESEARCH
SALES
SALES
RESEARCH
SALES
SALES
ACCOUNTING
RESEARCH
ACCOUNTING
SALES
RESEARCH
LOC
------------DALLAS
CHICAGO
CHICAGO
DALLAS
CHICAGO
CHICAGO
NEW YORK
DALLAS
NEW YORK
CHICAGO
DALLAS
DEPTNO
---------30
20
10
DNAME
-------------SALES
RESEARCH
ACCOUNTING
LOC
------------CHICAGO
DALLAS
NEW YORK
14 rows selected.
Well, thats easy enough, you think, just add a DISTINCT to the statement:
[PATRICK]SQL>SELECT DISTINCT d.deptno, d.dname, d.loc
2
FROM dept d
DEPTNO
---------20
10
30
DNAME
-------------RESEARCH
ACCOUNTING
SALES
LOC
------------DALLAS
NEW YORK
CHICAGO
DNAME
-------------RESEARCH
SALES
ACCOUNTING
LOC
------------DALLAS
CHICAGO
NEW YORK
This is exactly what we want to see but for big tables this is not the
correct way to go. For every record in the dept table all the records in
Dear patrick
3/3
Patrick Barel
DEPTNO DNAME
LOC
---------- -------------- ------------40 OPERATIONS
BOSTON
Hope this gives you a bit more insight in this subject and gives you a
better understanding of the wonders of the SQL language. Notice there
are many ways to reach the same result, but one approach might be
more economical than the other.
Please note that with the current optimizer in the database Oracle will
rewrite your query to use the best approach for the task. If the inner
table (in our example EMP) is rather small, then the IN approach might
be the best, in other cases it might be better to use the EXISTS approach.
Where in earlier versions you had to think about which way to go (IN is
better for small tables, EXISTS is better for big ones), you can now rely
on the optimizer to make the correct choice.
If you would want to see exactly the opposite of this query, i.e. all departments with no employees, you use an ANTI-JOIN pattern, which is pretty
much the same but in this case you use NOT IN or NOT EXISTS. A different
approach, which I think is pretty nice is to use an OUTER JOIN and check
for the non-existence of values in column for the OUTER JOINED table.
[PATRICK]SQL>SELECT
2
FROM
3
LEFT
4 WHERE
5 /
Happy Oracleing,
Patrick Barel
If you have any comments on this subject or you have a question you
want answered, please send an email to patrick@otechmag.com.
If I know the answer, or can find it for you, maybe I can help.
Michael Rosenblum
www.dulcian.com
Challenging
Dynamic SQL
Myths
Challenging
dynamic SQL myths
2/6
Michael Rosenblum
For all of the years that Dynamic SQL was a part of the Oracle technology
stack, it developed its own mythology. The reason for these myths is the
magnitude of changes that Oracle has applied to Dynamic SQL over the
last decade. The problem is that some of these myths are simply wrong
and they have always been wrong. Dynamic SQL needs to be used with
care but most conventional wisdom about dynamic SQL is in full, or in
part, incorrect.
It is fair to say that limitations existed in previous versions of Dynamic
SQL. Many of these limitations have been removed or reduced in the
current release of the Oracle DBMS. As a result, most problems that you
hear about should always be met with the question: In which version did
you experience this issue?
Unfortunately, people often listen to the detractors and thereby fail
to use Dynamic SQL where it could save them a great deal of time and
effort. This section challenges the most common myths and provides
evidence to show why they are wrong.
1. Always use bind variables to pass data inside of dynamically constructed code modules. Never concatenate. Since bind variables are processed after the code is parsed, unexpected values (such as OR 1=1)
will not be able to impact what is being executed.
2. The datatypes of these variables should match the datatypes of the
defined parameters. This eliminates the risk of somebody manipulating NLS-settings and sneaking malicious code into an implicit datatype
conversion mechanism.
3. If you need to concatenate any structural elements (columns, tables,
and so on), always use the DBMS_ASSERT package. This package has
a number of APIs to ensure that input values are what they should be.
For example:
a. SQL_OBJECT_NAME (string) checks whether the string is a valid
object.
b. SIMPLE _SQL_NAME(string) checks whether the string is a valid
simple SQL name.
c. SCHEMA_NAME(string) validates that the passed string is a valid
schema.
It is also important to remember that technologies similar to Dynamic
SQL exist in other environments, and not only in PL/SQL. For example,
it is reasonably common for middle-tier developers to build their SQL
statements on the fly and even open that functionality to end-users using
ad-hoc query tools. You need to be aware that providing users with this
capability gives them far more power than you might expect and opens
Challenging
dynamic SQL myths
3/6
Michael Rosenblum
new holes faster than you can programmatically close. Any environment
that allows end-users to enter real code (even in the form of customized
WHERE-clauses) should be considered a major security threat, no matter
how carefully it is constructed.
SQL>
2
3
4
5
6
7
8
9
10
11
12
SQL>
MinDate:20111103
The main advantage of Dynamic SQL is that you can fine-tune your code
at runtime. By adding a small extra layer of complexity, you can utilize the
knowledge that was not available at compilation time. This extra knowledge is what can make Dynamic SQL-based solutions more efficient than
their hard-coded counterparts.
In general, Dynamic SQL thrives on the unknown. The less information
that is available now, the more valuable its usage. For example, even
though the classic problem of object IN-list default cardinality of 8168
Challenging
dynamic SQL myths
4/6
Michael Rosenblum
enough to completely skip soft parses. The proof can be demonstrated
by running the following script and checking the trace:
SQL>
SQL>
SQL>
2
3
4
5
6
7
8
9 /
Obviously, both queries were parsed only once. Exactly the same
optimization is applicable to a FORALL statement in addition to firing
all INSERTs as a single roundtrip:
SQL>
SQL>
2
3
4
5
6
7
8
9
10
11
12
13
Challenging
dynamic SQL myths
5/6
Michael Rosenblum
<<<<<<<<<<<<<< Extract from the TKRPROF report >>>>>>>>>>>>>>>>>>
SQL ID: 7uawqxwvd81jc Plan Hash: 0
INSERT INTO dynamic_sql_q3(a) VALUES (:1)
call
count
cpu
elapsed
disk
query
current
rows
------- ------ -------- ---------- -------- ---------- ---------- ---------Parse
1
0.00
0.00
0
0
0
0
Execute
1
0.01
0.01
0
4
45
50
Fetch
0
0.00
0.00
0
0
0
0
------- ------ -------- ---------- -------- ---------- ---------- ---------total
2
0.01
0.01
0
4
45
50
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 83
(recursive depth: 1)
Number of plan statistics captured: 1
The report above shows perfect results. For 50 processed rows, there
is only one PARSE step and one EXECUTE step. This is solid proof that
Dynamic SQL is no longer guilty of expending too many additional
resources while preparing statements on the fly.
scripts. They can be directly integrated into PL/SQL code and become a
critical part of the overall solution. Although, creating real database objects on the fly in many formally regulated environments can indeed cross
the line, there are legitimate cases that will still pass the strictest checks.
One such case comes from my experience maintaining a multi-tier system
where the communication between the middle tier and the database was
done using a connection pool. Eventually, the following problem was
observed. At the end of the day, each session from the pool locked a significant number of segments from the TEMP tablespace. The reason was
that one part of the system used global temporary tables (GTT), defined
as ON COMMIT PRESERVE. Once they acquire a segment, these tables
do not release it even if you delete all rows. Obviously, the only way to
reset the high-water mark is to use the TRUNCATE command. However,
TRUNCATE is a DDL statement and cannot be used in straight PL/SQL!
Thats where Dynamic SQL comes to the rescue. The following module
resets all GTTs in the defined session if any one of them was touched:
PROCEDURE p_truncate IS
v_exist_yn VARCHAR2(1);
BEGIN
SELECT Y
INTO v_exist_yn
FROM v$session s,
v$tempseg_usage u
WHERE s.audsid = SYS_CONTEXT(USERENV,SESSIONID)
AND s.saddr = u.session_addr
AND u.segtype = DATA
AND rownum = 1;
Challenging
dynamic SQL myths
6/6
Michael Rosenblum
FOR c IN (SELECT table_name FROM user_tables
WHERE TEMPORARY = Y and DURATION = SYS$SESSION) LOOP
EXECUTE IMMEDIATE truncate table ||c.table_name;
END LOOP;
END;
Summary
The biggest challenge in learning Dynamic SQL is to get past your initial
fear of this feature. Of course, with any advanced technology comes risk
with its misuse. In the case of Dynamic SQL, the chances of encountering
security or performance issues often outweigh the potential benefits.
Anar Godjaev
www.yapikredi.com.tr
ACE Associate
Blog: anargodjaev.wordpress.com/
How to protect
your sensitive data
using Oracle Database
Vault / Creating and
Testing Realms Part II
database changes, and enforces controls over how, when and where
application data can be accessed. Oracle Database Vault provides security
benefits to customers even when they have a single DBA by:
Preventing hackers from using privileged users accounts to steal
application data
Protecting database structures from unauthorized and/or harmful
changes
Enforcing controls over how, when and where application data can be
accessed
Securing existing database environments transparently and without
any application changes
Among the more common audit findings are unauthorized changes to
database entitlements, including grants of the DBA role, as well as new
accounts and database objects. Preventing unauthorized changes to
production environments is important not only for security, but also for
compliance as such changes can weaken security and open doors to hackers, violating privacy and compliance regulations. Oracle Database Vault
SQL Command Controls allow customers to control operations inside the
database, including commands such as create table, truncate table, and
create user. Various out-of-the-box factors such as IP address, authentication method, and program name help implement multi-factor authorization to deter attacks leveraging stolen passwords. These controls prevent
accidental configuration changes and also prevent hackers and malicious
insiders from tampering with applications. The Duty Separation feature
Oracle customers today still have hundreds and even thousands of databases distributed throughout the enterprise and around the world. However, database consolidation will continue as a cost-saving strategy in the
coming years. The physical security provided by the distributed database
architecture must be available in the consolidated environment. Oracle
Database Vault addresses the primary security concerns of database
consolidation.
During the setup procedure, one of the main objective was to ensure that
the users with the high privileges was not able to access HR data but
could still administer the database containing the HR Data Realm. . Once
the realm was named and enabled, we selected Audit on failure in order
to send a notification if rules are violated.These are referred to as Realm
Secured Objects. For each object in realm owner, object type and name
need to be specified. In this case, we used the wildcard (%) option to
identify all objects owned by the HR user.
Some employees will need authorization to modify the database as business needs dictate. After running the test above, the user, HR, was added
to HR Data Realm using realm authorizations.
Source :
Oracle Database Vault Administrators Guide 11g Release 2 (11.2)
Michelle Kolbe
www.backcountry.com
12c Partitioning
for Data
Warehouses
12c partitioning
for data warehouses
2/11
Michelle Kolbe
A feature of the Oracle database since Oracle 8.0 in 1997, Oracle Partitioning enables large tables and indexes to be split into smaller pieces to
improve performance, availability, and manageability. Queries are sped
up by orders of magnitudes and a key feature for data warehouse environment implementations. These smaller pieces that the table is split
into are called partitions. Each partition has its own name and optionally
its own storage characteristics. Partitions can be used to spread out and
balance IO across different storage devices. The DBA is able to manage
the partitioned table as a whole or as individual pieces. The cost based
optimizer can use partition pruning to read only the blocks from specific
partitions into memory based on the where clause filters in the query.
Partitioning Strategies
Using a partitioning key, a set of columns used to determine which partition a row will exist in, tables and indexes can be partitioned using one or
many of multiple strategies.
Range: Data is split based on a range of values of the partitioning key,
most commonly a date field. Partitions are defined by their upper limit
with the lower limit being defined by the upper limit of the preceding
partition. The last partition can optionally be open-ended with no limit
to avoid errors when inserting new data.
List: A discrete list of values is used for distributing the data for example, a list of states may be used for the partitioning key. A DEFAULT
partition is used to catch any values that may not fall into one of the
defined list values.
12c partitioning
for data warehouses
3/11
Michelle Kolbe
New Features
Interval-Reference Partitioning
New in 12c is the ability to composite partition first by interval then reference. With this method, new partitions will be created when the data
arrives and the child tables will be automatically maintained with the new
data. The partition names will be inherited from already existing partitions.
An example
Lets check what partitions we have. There is one partition in each table.
select table_name, partition_name, high_value, interval
from user_tab_partitions
where lower(table_name) in (orders, orderlines);
If I try to create the following two tables in Oracle 11g, the first statement
will succeed but the second will give me an error.
create table orders
(
order_number number,
order_date_id number,
constraint orders_pk primary key(order_number)
)
partition by range(order_date_id) INTERVAL(7)
(
partition p1 values less than (20140101)
);
table ORDERS created.
create table orderlines
(
orderline_id number,
order_number number not null,
constraint orderlines_pk primary key(orderline_id),
constraint orderlines_fk foreign key (order_number) references orders
)
partition by reference(orderlines_fk);
ORA-14659: Partitioning method of the parent table is not supported
Now lets insert some data in the the orders table and check the
partitions.
insert into orders values (1, 20131231);
insert into orders values (2, 20140102);
insert into orders values (3, 20140113);
commit;
1 rows inserted.
1 rows inserted.
1 rows inserted.
committed.
where lower(table_name) in (orders, orderlines);
12c partitioning
for data warehouses
4/11
Michelle Kolbe
We can see that the orders table has 3 partitions now and 2 were created
as INTERVAL partitions. Lets add data to the orderlines table now and
check the partitions.
insert into orderlines values (1, 2);
commit;
1 rows inserted.
committed.
Indexing
The child table now has one INTERVAL partition that is named the same
as the parent table.
Oracle gives us the functionality to manually split partitions into smaller
subsets. If we do this on an interval partition, then some of the partitions
will be converted to conventional partitions instead of interval partitions. In
12c, this split will also convert the child partitions to conventional partitions.
alter table orders
split partition for (20140104) at (20140104)
into (partition p20140101, partition p20140104);
table ORDERS altered.
select table_name, partition_name, high_value, interval
from user_tab_partitions
where lower(table_name) in (orders, orderlines);
Partial Index
Oracle 12c has a new feature called a Partial Index which allows the user
12c partitioning
for data warehouses
5/11
Michelle Kolbe
to create indexes that only span certain partitions, not all. This feature
works on Local and Global indexes and can complement the full indexing
strategy. The policy can be overwritten at any time. To explain this concept, I think that the Oracle chart below describes the situation the best.
As you can see from this charts example, the table was defined with 3
of the partitions having partial local and global indexes. The last partition
does not have any indexes.
Heres how this is implemented: First, create a partitioned table with
INDEXING OFF set on the whole table definition but then INDEXING ON
for each individual partition that you want indexes on.
12c partitioning
for data warehouses
6/11
Michelle Kolbe
Now create two LOCAL indexes; the first one is a partial index.
Now lets create two GLOBAL indexes, the first one being a partial index.
And lets check out how these are defined in the Index Partitions table.
And now lets query the indexes table for these indexes.
Both indexes are VALID and the first one is defined as Partial.
If we look at the number of segments created for these indexes,
we can see that the local indexes have differing amounts of segments.
select segment_name, segment_type, count(*)
from user_segments
where segment_name in (ORDERS_IDX1, ORDERS_IDX2, ORDERS_IDX3, ORDERS_IDX4)
group by segment_name, segment_type
order by 1;
As you can see from the results, an index partition was created on
the partial index for P_MAX even though we specified INDEXING OFF
however, its marked as UNUSABLE.
12c partitioning
for data warehouses
7/11
Michelle Kolbe
So by now you are probably asking, what does this mean when someone queries the whole dataset? Lets look at the explain plan for a query
against the global partial index: orders_idx3.
explain plan for select count(*) from orders where col3 = 3;
select * from table(dbms_xplan.display);
orders_range_part
2014 VALUES LESS
2015 VALUES LESS
2016 VALUES LESS
ADD
THAN to_date(01-01-2015, MM-DD-YYYY),
THAN to_date(01-01-2016, MM-DD-YYYY),
THAN to_date(01-01-2017, MM-DD-YYYY);
Notice that the index is used and also the orders table is queried with
TABLE ACCESS FULL for the one partition without the index.
Partition Maintenance
The syntax ALTER TABLE can be used for maintaining partitioned tables
similar to non-partitioned tables. 12c has added functionality to make this
table maintenance easier.
12c partitioning
for data warehouses
8/11
Michelle Kolbe
The specification of the partitions can also be done as a range or as a list
of values in that partition.
ALTER TABLE orders_range_part
MERGE PARTITIONS year_2010 to year_2013
INTO PARTITION historical_data_partition;
ALTER TABLE orders_range_part
MERGE PARTITIONS for (to_date(01-01-2010, MM-DD-YYYY)),
for (to_date(01-01-2011, MM-DD-YYYY)),
for (to_date(01-01-2012, MM-DD-YYYY)),
for (to_date(01-01-2013, MM-DD-YYYY)),
INTO PARTITION historical_data_partition;
12c partitioning
for data warehouses
9/11
Michelle Kolbe
to run daily at 2 am by default. The index can also be manually cleaned up
by running the above job, running ALTER INDEX REBUILD [PARTITION] or
ALTER INDEX [PARTITION] COALESCE CLEANUP.
An Example
Create a range partitioned table with 500 records in 5 partitions.
create table orders
(
order_number number
)
partition by range(order_number)
(
partition p1 values less than (100),
partition p2 values less than (200),
partition p3 values less than (300),
partition p4 values less than (400),
partition p5 values less than (500),
partition p_max values less than (MAXVALUE)
);
table ORDERS created.
insert /*+ APPEND*/ into orders
select level from dual
connect by level < 501;
commit;
500 rows inserted.
committed.
select count(*)
from orders;
COUNT(*)
---------500
Now create an index on this table. When the index is first created, it will
not have any orphaned records.
create index orders_idx on orders(order_number);
index ORDERS_IDX created.
select index_name, orphaned_entries
from user_indexes
where index_name = ORDERS_IDX;
INDEX_NAME
ORPHANED_ENTRIES
---------------------------------------------------------------------------------------------------------------------ORDERS_IDX
NO
Now we are going to truncate the partition. This statement runs super
fast and the index is still valid.
alter table orders truncate partition p1 update indexes;
table ORDERS altered.
select index_name, status, orphaned_entries
from user_indexes
where index_name = ORDERS_IDX;
INDEX_NAME
STATUS
ORPHANED_ENTRIES
---------------------------------------------------------------------------------------------------------------------ORDERS_IDX
VALID
YES
The index has orphans which can either be cleaned manually or cleaned
automatically with the SYS.PMO_DEFERRED_GIDX_MAINT_JOB that runs
12c partitioning
for data warehouses
10/11
Michelle Kolbe
by default at 2 AM daily. Lets manually clean it now.
exec dbms_part.cleanup_gidx();
anonymous block completed
select index_name, status, orphaned_entries
from user_indexes
where index_name = ORDERS_IDX;
INDEX_NAME
STATUS ORPHANED_ENTRIES
----------------------------------------------------------------------------------ORDERS_IDX
VALID
NO
Note that the cleanup_gidx procedure has some optional parameters.
Without the parameters, it runs on the entire database cleaning up the indexes. If we include the schema name, it will run on the schema level and
if we give it a schema and a table name, it will run for only the indexes on
that table.
An ALTER INDEX <index name> REBUILD PARTITION <partition name>
or ALTER INDEX <index name> COALESCE CLEANUP will also cleanup
orphaned rows.
There are a couple of best practices to note when using Online Partition
Move. Compression while DML is being performed is that the DML will
have an impact on compression efficiency. The best compression ratio will
come with the initial bulk move. Also, you should seek to reduce the number of concurrent DML operations that occur while doing a partition move
because it requires additional disk space and resources for journaling.
Reference Partitioning
When the Oracle 11g release included a new feature called Reference
12c partitioning
for data warehouses
11/11
Michelle Kolbe
Partitioning, it was a bit limited in its scope of operations. With 12c, three
new features have been added to help with creation and maintenance of
this partitioning scheme.
TRUNCATE CASCADE is now an option on the parent table that will automatically truncate the child partitions also. In 11g to achieve this functionality, separate TRUNCATE statements would need to be run on the
parent and the child tables.
Similarly to the TRUNCATE, CASCADE has been added to the partition
EXCHANGE feature. This will modify the child table when modifications
are made to the parent table.
Lastly, reference partitioning can now be used on parent tables that are
interval partitioned as mentioned above in Partitioning Strategies.
A few tips
If you want to change an existing partitioned table to interval partitioning, you can execute this command:
ALTER TABLE <table name> SET INTERVAL (numtoyminterval(1,MONTH));
References
Oracle Database 12c: Whats New in Partitioning?
Tutorial.
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/
Partitioning/12c_parti.html
Oracle Database 12c: Partitioning Improvements with Tom Kyte Video.
https://www.youtube.com/watch?v=W71G0H91n_k
Phillip Brown
www.e-dba.com
Defining Custom
Compliance
Rules using 12c
OEM Lifecycle
Management
Pack
The Lifecycle Management Pack in OEM has been around for some time
now and it covers a wealth of functionality. Its formation started at
around 10g but at that time the functionality was split across a number of
management packs and didnt go by the LCMP name. From 12c the packs
(Configuration Pack, Change Management, Provisioning) were amalgamated into the Lifecycle Management Pack and the functionality was
refined and significantly improved. The Lifecycle Management Pack can
take you from bare-metal provisioning to data change management within a schema, however the functionality I have been working with almost
exclusively for 12 months now is the area of security and compliance.
So this is a quick first attempt at getting the data into 12c OEM from
our targets, it looks ok, but there are some key problems. If we select a
target and have a look at the preview data brought back I can show you a
couple of issues.
With the first query, we are just bringing back too much information, we
need to be selective. We only really need a single column single value returned. This is important when we start to write the compliance checks,
they are much less flexible than configuration extensions so you want to
make your job as easy as possible here.
The final configuration extensions brings back nothing, which you may
think is ok, but its actually an issue. If your configuration extension is
bringing back nothing it means it cannot be evaluated, therefore it is
compliant. Regardless of what the configuration extension is, it ALWAYS
need to bring back a value.
So this is really why it is very very important to clarify the questions you
are asking prior to developing configuration extensions and subsequent
compliance rules. With that in mind our queries are updated to this:
SELECT
s1.target_guid
,
attrvalue AS info ,
s2.DATA_SOURCE_NAME ,
s2.VALUE
FROM
MGMT$CCS_DATA s2
,
MGMT$ECM_CURRENT_SNAPSHOTS s1gen1,
MGMT$TARGET s1
WHERE
(
s1gen1.TARGET_GUID
= s1.TARGET_GUID
AND s1gen1.ECM_SNAPSHOT_ID = s2.ECM_SNAPSHOT_ID (+)
AND s1.TARGET_TYPE
= oracle_database
AND s1gen1.SNAPSHOT_TYPE = ccs_c_OTECH_EXTENS_08100983F47B00C0E0531EDF
F56893FA
)
This query can now be used as the basis for ALL your compliance rules.
If you execute this query in SQLPlus you will see it will bring back all your
configuration extensions. What you can now do is for each compliance
rule add in the relevant data source name (ALIAS) which you defined in
each configuration extension. If you have a standard naming convention
and a single configuration extension this becomes a very quick process.
SELECT
s1.target_guid
,
attrvalue AS info ,
s2.DATA_SOURCE_NAME ,
s2.VALUE
FROM
MGMT$CCS_DATA s2
,
MGMT$ECM_CURRENT_SNAPSHOTS s1gen1,
MGMT$TARGET s1
s1gen1.TARGET_GUID
= s1.TARGET_GUID
s1gen1.ECM_SNAPSHOT_ID = s2.ECM_SNAPSHOT_ID (+)
s1.TARGET_TYPE
= oracle_database
s1gen1.SNAPSHOT_TYPE = ccs_c_QTECH_EXTENS_08100983F47B00C0E0531EDFF56893FA
s2.DATA_SOURCE_NAME
= OTECH_1
)
Cato Aune
www.sysco.no/en/stikkord/
middleware
Starting
WebLogic
Components
There are three important components in booting a WebLogic server
Node Manager
WebLogic Scripting Tool (WLST)
Shell scripts
Node Manager is a WebLogic Server utility that let you
Start
Shut down
Restart
Administration Server and Managed Server instances.
When you have decided on how you want to start WebLogic, stick with
that decision. Using different methods each time will give you some interesting mysteries to solve.
Shell scripts could be regular bash shell scripts for Linux or cmd/bat files
on Windows. Shell scripts will be used for init.d/xinit.d scripts in Linux to
start Node Manager and WebLogic on server boot or to create Windows
services in Windows. When running the configuration wizard, default
start scripts are generated and placed in $DOMAIN_HOME/bin
These scripts will be used later on.
This works well, but make sure to use nohup and put the process in the
background, or the server instance would stop when you log out.
$ nohup startWebLogic.sh &
Starting directly from Node Manager does not set the environment
variables mentioned above. It is possible to provide the same information
manually along with nmStart.
When starting a managed server from Admin Server via Node Manager,
several environment variables are defined:
JAVA_VENDOR, JAVA_HOME, JAVA_OPTIONS, SECURITY_POLICY, CLASSPATH, ADMIN_URL
Using WLST and Node Manager
To use WLST and Node Manager, the requirements are
Node Manager must be up and running
Connect to Node Manager using nmConnect
nmConnect(userConfigFile=nmUserFile, userKeyFile=nmKeyFile,
host=nmHost, port=nmPort, domainName=domain,
domainDir=domainPath, nmType=nmType)
Start Admin Server and Managed Servers using nmStart
nmStart(AdminServer)
nmStart(ms1)
Often you want some custom config for each server, like heap size (-Xms,
-Xmx) or more advanced like configure where to find the coherence
cache in a high availability setup.
Where to put you custom config depends on how WebLogic is started.
If you start from Admin Server via Node Manager, you could place the
config in Arguments on the Server Start tab in the Configuration for each
server. The config will be stored in config.xml and is visible from the Admin Console.
Scripts
wls.py - the actual WLST script that starts and stops WebLogic instances
Recommendations
It is recommended to always use Node Manager to start Admin Server
and managed servers
import sys
def startAdmin():
print Starting AdminServer
nmConnect(userConfigFile=nmUserFile, userKeyFile=nmKeyFile, host=nmHost,
port=nmPort, domainName=domain, domainDir=domainPath, nmType=nmType)
nmStart(AdminServer)
nmDisconnect()
return
def stopAdmin():
print Stopping AdminServer
connect(userConfigFile=wlsUserFile, userKeyFile=wlsKeyFile, url=adminUrl)
shutdown(AdminServer, force=true)
disconnect()
return
starts Node Manager as user oracle, while the script in the documentation starts Node Manager as root, which is not recommended.
Make sure to adjust the paths in the script to fit your environment.
The script must be made runnable (chmod 0755) and activated (chkconfig --add) before it will be used next time the server starts or stops.
#!/bin/sh
#
# nodemanager Oracle Weblogic NodeManager service
#
# chkconfig: 345 85 15
# description: Oracle Weblogic NodeManager service
### BEGIN INIT INFO
# Provides: nodemanager
# Required-Start: $network $local_fs
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: Oracle Weblogic NodeManager service.
# Description: Starts stops Oracle Weblogic NodeManager.
### END INIT INFO
. /etc/rc.d/init.d/functions
exit $RETVAL
stop() {
echo -n $Stopping $SERVICE_NAME:
OLDPID=`/usr/bin/pgrep -f $PROCESS_STRING`
if [ $OLDPID != ]; then
/bin/kill -TERM $OLDPID
else
start() {
echo -n $Starting $SERVICE_NAME:
/bin/su $DAEMON_USER -c $PROGRAM_START & RETVAL=$?
echo [ $RETVAL -eq 0 ] && touch $LOCKFILE
}
stop() {
echo -n $Stopping $SERVICE_NAME:
/bin/su $DAEMON_USER -c $PROGRAM_STOP & RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f $LOCKFILE
}
restart() {
stop
sleep 10
start
}
case $1 in
start)
start
;;
stop)
stop
;;
restart|force-reload|reload)
restart
;;
*)
echo $Usage: $0 {start|stop|restart}
esac
exit 1
Goran Stankovski
www.limepoint.com
DevOps and
Continuous
Delivery for
Oracle
and cooperation. Implementing DevOps and Continuous Delivery methodologies in a traditional enterprise environment has many challenges.
One such challenge is that DevOps can mean many different things to
different people - there is no DevOps governing or certifying body, nor is
there one clear and concise definition. In many respects DevOps is about
attitudes and relationships, more than tooling and procedures - it is the
bringing closer of Development and Operations capabilities within an organisation. One thing is clear; the greater the cohesion of Development
and Operations, the greater the business value of delivered solutions and
capabilities.
Common objectives of DevOps include delivering capabilities with a
faster time to market (or speed-to-value), reducing the failure rate of new
releases, minimising the lead time between fixes, improving system quality, and achieving a faster mean time to recovery in the event of a new
release impacting the current system. Continuous Delivery is the method
by which such objectives can be met.
Continuous Delivery is the practice of software delivery through automation, providing the ability to rapidly, reliably and repeatedly deploy software capabilities to customers with minimal manual interaction, minimal
risk, and typically much higher quality.
With Continuous Delivery, we continuously deliver small, understandable
and reversible changes over time. Of course there is an overhead to this
Keep everything in Source Control. Adopting an Infrastructure-ascode approach will allow you to define your infrastructure and platform
through code constructs and deliver it through automation. Storing your
entire platform and application configuration in source control systems,
such as GIT, will allow you to deliver consistent platforms and applications to support your business initiatives.
Large numbers of small changes are superior to small numbers of large
changes. Smaller, digestible changes are easier managed, and easily
rolled back.
Adopt the Continuous Improvement approach. Improving the Continuous Delivery and Automation capability over time will allow you to incorporate feedback into the process to deliver prolonged value.
So what about DevOps and Continuous Delivery in an Oracle context?
Oracle is considered an Enterprise class software. In Continuous Delivery and DevOps circles, enterprise software is viewed by many to be diametrically opposed to its key principles and tenants. It is inflexible, hard
to work with, delivered on a long software release cycle, and as a result
is slow to catch up with industry trends and emerging standards.
This misalignment in worldview tends to have many DevOps and Continuous Delivery initiatives place enterprise software, such as Oracle, in the
too hard basket and leave it for a later project to worry about. Many
take into account in order to deliver this capability? What does this mean
with respect to Oracle Infrastructure, Virtualisation, Database, Middleware, and Applications technologies?
Subsequent articles in the series will focus on this aspect.
Adopting Continuous Delivery and DevOps within your enterprise requires a fundamental change to your technical project delivery capability, which will require maturity to progress it within the organisation. In
our opinion, DevOps and Continuous Delivery capability maturity can be
viewed in the following three distinct phases:
1.
2.
3.
The final phase in the DevOps maturity is bringing the capability into
business as usual (BAU). Considerations such as configuration change
management and drift detection are critical in this phase. This is the most
difficult phase to adapt, as there is typically a huge separation and disconnect between the Development and Operations BAU teams in many
organisations. Most of these issues are typically organisational and not
technical in nature. Trust plays a key part. How can BAU trust what is
delivered? How can we bridge this divide?
Over the next 4 issues of this magazine, we will be delivering articles that
focus on each of the phases, and share our experience and recommendations in delivering such DevOps and Continuous Delivery capabilities to
organisations that leverage Oracle technology.
Continuous delivery
automation for Oracle
Infrastructure. Databases. Middleware. Applications.
Lonneke Dikmans
www.eproseed.com
Choosing the
right Mobile
Architecture
choosing
the right mobile architecture
2/9
Lonneke Dikmans
Deciding you need to create a mobile app is a no brainer these days. However, once this decision is made the next one pops up immediately: what
architecture should I chose? Native? Web? Hybrid? In this article you will
learn about the differences between these three architectures and the
impact the mobile architecture has on your service architecture, on security and last but not least on scalability. Three use cases are described to
guide you through the decision process.
do that, you have access to device specific capabilities like the camera.
Web applications are the opposite of that; they reside on the server and
are accessed using a browser. This makes them independent of device
type and OS version. You are limited with respect to the features you can
use by the capabilities of the browser and HTML5, but can use any tool
that supports web applications.
Hybrid apps are a mix between Native apps and Web apps. You can
download them from the App store, or Google Play etc. Some parts are
programmed in the native app language; some parts are residing on the
server and are shown using embedded web views. You can access both
Native apps and Hybrid apps online and offline, as long as everything you
need is available offline. Web apps you can access online only.
The table below summarizes the most important features of the three
architectures.
Native
App
Resides
on
the
device
Download
from
App
store
Native
views,
depending
on
device
and
OS
version
Access
to
all
device
features
Native apps are specifically built for a specific device and depend on the
specific operation system (OS) version. For example, you can build an
iPhone app for iOS 8.2 using XCode to program Objective C. When you
Web
App
Resides
on
the
server
URL
Browser,
depending
on
Browser
version
Limited
functionality,
depending
on
HTML5
Online
Hybrid
App
Device
and
server
Download
from
App
store
Native
views
and
Embedded
Browser
Access
to
all
device
features
Online
and
Offline
choosing
the right mobile architecture
3/9
Lonneke Dikmans
As you can imagine, maintaining native apps for different devices and
different versions of the OS is expensive and cumbersome. That is why
Oracle Mobile Application Framework is very powerful: you can build
your application once and deploy it to different devices. Apart from the
device type, it allows you to choose the size of the device and the OS version. You can declaratively define icons and other device specific features
for the entire app. At deploy time these will be generated into your native
code.
Of course this does not take away the burden of deployment or releasing
for different platforms, but at least you can work from the same codebase.
choosing
the right mobile architecture
4/9
Lonneke Dikmans
Protocol
Granularity
When you think about the granularity of your services, you have to take
the form factor (how big is the screen on my device and how do I access
it or input data) into account.
When you design for an iPad, you have almost as much real estate as on
a laptop.
SOAP
Based
on
operations
Described
by
WSDL
Coarse
grained
Message
security
plus
regular
web
application
security
(in
case
of
HTTP)
Platform
and
language
agnostic
REST
/
XML
Based
on
objects
Described
by
WADL
Fine
grained
(micro-services)
regular
web
application
security
REST/JSON
Based
on
objects
Described
by
WADL
Fine
grained
(micro-services)
regular
web
application
security
Platform
agnostic,
programming
language
specific
(JavaScript)
However, when you are designing an app for a mobile phone or smart
watch, this changes rapidly. Typically, a mobile phone is chatty in terms of
service communication: it calls small services often, because there is less
room for data on the screen and because data is often paid by volume.
choosing
the right mobile architecture
5/9
Lonneke Dikmans
The following items determine the granularity of the services you need:
Form factor. For smaller screens (both in device-size and resolution)
you want small services to avoid using to much bandwidth or memory
On route versus stationary. When you are on route (walking to your
car) you spend less time on a transaction compared to when you are
stationary (sitting at home on the couch)
Hands free or not. When you are driving, you need voice input. When
you are on the couch you can type
Online versus Offline. If data needs to be cached you need bigger
services to make sure you fetch as much as you can before you lose the
connection.
Because in a mobile web app the code is residing on the server, it is relatively easy to reuse the services you already built in the backend and filter
the data in the web application, before they get sent over the wire. You
can create a responsive application that behaves differently based on the
device that the browser is running on.
When you are building a native app you probably want to filter the data
on the server, building a specific layer of services in the backend (presentation services). This avoids unnecessary delays and sending over too
much data to your device. The exception to this guideline is when you
need to cache a lot of data on your mobile device; in that case you probably want to call coarser grained services.
Note that in both cases you can reuse the original services in your backend and you should make sure they contain the business logic, to prevent
copying logic to multiple places
choosing
the right mobile architecture
6/9
Lonneke Dikmans
Security considerations
When you decide to create a mobile app using native or hybrid
technology, there are a number of things you need to take into account.
This includes:
Network security: encryption;
Mobile Device Management (MDM) versus Mobile Application
Management (MAM);
Service side management: authentication and authorization of your
services in the back end and management of the services or API.
Lets take a look at mobile device management, mobile application
management and service management:
MDM
Secure
device
password
VPN
between
device
and
server
Wipe
entire
device
Track
device
Native
and
hybrid
Dedicated
devices
MAM
Secure
container
password
Secure
tunnel
between
app
container
and
server
Wipe
application
Track
application
Native
and
hybrid
BYOD
Service
management
Secure
service
Transport
level
security
using
SSL
(for
example)
Protection
against
DOS
Track
service
use
Web
App
BYOD
Oracle offers mobile security suite to take care of your security considerations for Enterprise grade mobile apps by containerizing them. On top
of that you can secure and manage your services (using API gateway)
and Oracle Identity and Access Management feature. This is shown in the
figure below.
Scalability
The introduction of mobile devices has lead to a huge increase in load in
services that used to be very stable. Examples of this are:
Bank apps to check your balance. In the past people would get a bank
statement periodically; the bank was in control of the information.
Then Internet banking and ATMs entered the arena, allowing people to
check their balance when getting cash or when behind the computer at
home. With the introduction of mobile banking, customers can check
their balance before they make any major or minor purchase. This has
increased the load on the backend banking systems greatly.
Travel apps to check flight times. A similar effect has happened with
train and plane schedules. People always carry their mobile device and
check times frequently when traveling. This has increased the load on
the backend systems.
choosing
the right mobile architecture
7/9
Lonneke Dikmans
There are several solutions to handle the increase in load:
Cache the response. Data can be cached on the device or on the service bus or API manager. This decreases the load if the same customer
checks the same data repetitively.
Load balance the service in the backend. The servers in the backend
can be scaled out, so the additional load can be balanced over multiple
servers.
Smaller services (less data) for mobile use. Depending on the underlying backend architecture, creating smaller micro-services might decrease the load on the system because there is no excess of data being
sent to devices.
Use cases
In this case, the main reason to choose a native app, was the need to be
able to work offline and cache data. Field engineers work in areas where
there is sometimes no connection to the Internet. Because the device is a
special purpose device, mobile device management was sufficient:
If a field engineer leaves the job, or looses the device the entire device
is wiped clean.
There are no other apps on the device than the ones installed on their
by the utility company
Now that you have seen some considerations you have to take into account when you decide to build a mobile app, lets take a look at some
practical examples.
Illustration 7. Choosing the right mobile architecture: Field engineer fixing a meter
choosing
the right mobile architecture
8/9
Lonneke Dikmans
Judge reading court files
Illustration 8. Choosing the right mobile architecture: Judge reading case files
Illustration 9. Choosing the right mobile architecture: Engineer looking for a temp job
When people work from home, they are mostly working online. For
judges, security demands are high and theyd like to read files on a tablet. The solution is an hybrid app that works the similar to the browser
version for the desktop. The functionality is residing on the server. The
organization has chosen Mobile Device Management, because they hand
out the devices. There is no BYOD device policy in place. If there had been
a BYOD policy, MAM would have been better.
Specific presentation services are created for specific channels. There are
special services for mobile (smart phones). For the tablet, the services for
the Browser are reused.
In another case, field engineers who travel the world are looking for
their next job. They have time when at the airport, in a cab or on a train.
Usually they are connected (by Wi-Fi) and security demands for the application are relatively low. Jobs are posted and people can post resumes
on the site. In this case a web-app is chosen, using OAuth. Presentation
services were created to cater for the different form factors and to make
sure that the app is fast enough.
choosing
the right mobile architecture
9/9
Lonneke Dikmans
resides: on the device or on the server. Depending on the device features
you need access to and the connectivity demands, one of the three is the
best fit.
Because we are talking about enterprise grade apps, security is an important feature. When choosing native or hybrid apps, you need to think
about whether you want to use Mobile Application Management or
Mobile Device Management. In general Mobile Application Management
is more fine-grained and more user friendly. It is very suitable for BYOD
situations. The mobile security suite integrates with Oracle Identity and
Access Management, so that your mobile app can truly be a part of your
enterprise solution. Apart from Mobile Application or Device Management, you need to manage the APIs that your mobile app calls; often the
APIs are used by multiple (even third party) apps and web applications
and you dont want to rely on the diligence of the app programmer for
the security of the services and APIs.
When building mobile applications, the services in your organization are
impacted too. There are a number of design choices you have to make:
Stick with SOAP services or move to REST services;
Building an extra layer of services to cater for specific mobile app
needs (caching, smaller micro-services etc.) or solve this in the app
itself;
Scale out or cache to cater for the increase in load.
Using tools to help solve these issues is one part of the equation. However, making the right architectural decisions is the critical success factor that determines whether your company will be successful in taking
advantage of the possibilities that arise from modern technologies.
Biju Thomas
www.oneneck.com
Flashback Empowering
Power Users!
Introduction to Flashback
Oracle introduced the automatic undo management in version 9i of the
database. Ever since the flashback operations are available to retrieve
the pre-modification state of the data in the database. Figure 1 shows the
database version and the flashback feature introduced.
As you can see in Figure 1, there are many flashback features in Oracle Database. Few require DBA privileges, few have configuration requirement
before you can start using, and few are enabled by default and available
for non-administrators. The flashback features that help to query and
restore data as it existed at a prior timestamp on individual tables are
available (or should be made available) to the developers and application
Let us review the flashback features for power users (and developers)
and how to use them.
SYSTIMESTAMP
---------------------------------------------------07-DEC-14 03.31.22.982933 PM -05:00
The DBMS_FLASHBACK pl/sql package was introduced in Oracle9i Release 1 (yes, almost 15 years ago!!), when the automatic undo management was introduced in the Oracle Database. This is one of the powerful
and easy to use package to go back in time, and still available in Oracle
Database 12c.
The back to the future time point in the database can be enabled by
using the DBMS_FLASHBACK.ENABLE_AT_TIME or DBMS_FLASHBACK.
ENABLE_AT_SYSTEM_CHANGE_NUMBER procedure. Both procedures
enable flashback for the entire session. Once enabled, the queries in the
session will be as of the SCN or timestamp enabled. One big advantage of
using this package is to run pl/sql programs on data in the past.
The following code shows setting the flashback 4 hours back, and running
queries to retrieve some data accidentally deleted from the EMP table.
LOOP
FETCH frompast INTO pastrow;
EXIT WHEN frompast%NOTFOUND;
insert into hr.employees values pastrow;
END LOOP;
END;
/
PL/SQL procedure successfully completed.
COUNT(*)
---------1
SQL>
SQL> commit;
So, how do you retreive data using this method? The answer is PL/SQL.
Let me show you with an example how to retrieve the deleted employee
row using pl/sql. In the code, we open the cursor after enabling flashback, but disable flashback before start fetching. COMMIT is not included
in the pl/sql code. So, after execution, we confirm if the row is inserted,
and commit the record.
DECLARE
cursor frompast is
SELECT * FROM hr.employees
where employee_id =106;
pastrow hr.employees%rowtype;
BEGIN
DBMS_FLASHBACK.ENABLE_AT_TIME(TO_TIMESTAMP(07-DEC-14 11:30,DD-MON-YY HH24:MI));
OPEN frompast;
DBMS_FLASHBACK.DISABLE;
Commit complete.
SQL>
This is too much work to retrieve data. Enabling flashback for the session
is good to find information and run various queries. To actually restore
back the changes we should look into other flashback methods.
Flashback Query
Flashback query was introduced in Oracle9i Release 2. This is the best and
fastest way to save or retrieve information as it existed in the past. Flashback query is enabled by using the AS OF TIMESTAMP or AS OF SCN
clause in the SELECT statement.
If you do not want to restore the row directly into the table, you can save
the data in a table. In fact, it is a good practice to save the data in a table,
as soon as you know that some unintended data update or delete performed.
SQL> create table hr.employees_restore pctfree 0 nologging as
2 select * from hr.employees
3 as of timestamp
TO_TIMESTAMP(07-DEC-14 11:30,DD-MON-YY HH24:MI)
4 where employee_id = 106;
Table created.
SQL>
SQL>
The required row is still available, to get it back is easier, because the
session is not running in flashback mode, just the subquery is in flashback
mode.
SQL> insert into hr.employees
2 select * from hr.employees
3 as of timestamp
TO_TIMESTAMP(07-DEC-14 11:30,DD-MON-YY HH24:MI)
4 where employee_id = 106;
And remember, if you found the data still available for retrieve and want
to replace the entire table (by doing a truncate and insert) or want to
disable a constraint, do not attempt to do that before saving the data
to a temporary location. If you perform any DDL on the table, flashback
queries will not work anymore on that table.
Here is an experience. You can see here the DBA being proactive, tried to
disable the primary key before doing the insert, and
SQL> select count(*) from bt1;
1 row inserted.
COUNT(*)
---------70
SQL>
In such situations, always save the data in a temporary table, do the DDL
on the table, and insert from the temporary table. The following operations on the table prevent flashback operations.
moving or truncating
adding or dropping a constraint
adding a table to a cluster
modifying or dropping a column
adding, dropping, merging, splitting, coalescing, or truncating a
partition or subpartition (except adding a range partition).
To use Flashback operations, you require FLASHBACK and SELECT
(or READ) privilege on the table.
When there are many tables or if the table is big, it might be easier for
you to save the data as export dump file, as soon as you hear about an
unintended data manipulation. You can use of the expdp parameters
FLASHBACK_SCN or FLASHBACK_TIME to perform the export as of a
specific timestamp.
These parameters along with QUERY parameter helps you to further refine and filter the data exported. For example, to export the table to save
the deleted employee 106, we can use the following parameters.
dumpfile=exprestore.dmp
logfile=emprestore.log
tables=hr.employees
flashback_time=to_timestamp(07-DEC-14 16:45,DD-MON-YY HH24:MI)
query=where employee_id = 106
Flashback Table
If you are very certain that the entire table must be reverted back to a
state in the past, you can use the Flashback Table feature. For the flashback table to work, the row movement must be enabled. The following
example shows how to rollback a table to prior time.
SQL> FLASHBACK TABLE bt1 TO TIMESTAMP to_timestamp(07-DEC-14 16:45,DD-MON-YY
HH24:MI);
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled
SQL> alter table bt1 enable row movement;
Flashback in Export
69 OTech Magazine #6 winter 2014
Table altered.
is relevant here, the UNDO_RETENTION. This parameter provides a guidance to the database on how long the committed undo should be kept in
the undo segments.
Since flashback operations are very useful feature of the database, the
database tries to keep the undo records as long as possible, till there is
not enough room in the undo tablespace, even past the value specified in
seconds for UNDO_RETENTION (default is 900 seconds). The retention
behavior depends on the tablespace data file AUTOEXTENSIBLE property.
If the undo tablespace is autoextend, database tries to honor the time
specified in UNDO_RETENTION. If this duration is smaller than the longest running query in the database, undo is tuned to accommodate the
longest running query. When free space is low, instead of overwriting
unexpired undo (committed and within the retention period) record, the
tablespace auto-extends.
When any DML is run to insert, update, delete data from table, the undo
record is written to the rollback (undo) segment. This record is mainly
used by Oracle database to provide a read consistent view of the data. This
data is also used to rollback the transaction and to run flashback queries.
The duration of undo data available for the database depends on the size
of the undo tablespace. Once the transaction is committed, the undo
data is not required for the database operation, but is used by long running queries and flashback operations. There is one other parameter that
Oracle Undo Advisor provides the size required for undo tablespace
to keep specified amount of undo retained. Undo advisor is available
in DBMS_ADVISOR pl/sql package, but I prefer using Oracle Enterprise
Conclusion
As a developer or Power user, Oracle gives you the opportunity to recover from unintended mistakes in data changes. The requirement is
automatic undo management, which is the default since Oracle 11g. It is
also important to size the undo tablespace appropriately to retain undo
data for certain period of time. Since Oracle 10g, when a table is dropped,
it is kept in the recyclebin, and can be restored. Recyclebin feature is also
default since 10g. Discuss with your DBA, what you can do and the undo
retention settings. If you are a DBA reading this article, it is important for
you to show the developer or power user what they can do to query and
save data from the past might save you an unnecessary restore of the
database or table!
Lucas Jellema
www.amis.nl
The Rapid
Reaction Force
real time
business
monitoring
Key characteristics
OEP is a light weight component a with a fairly small footprint (one of the
reasons it can be used in the embedded use case for example on a device
the size of a Raspberry Pi). It holds relevant data in memory (or on grid);
most of the processing does not require I/O operation, allowing OEP to
respond very fast.
Interactions
Messages are typically read from JMSfor example, a queue in WebLogic or the SOA Suite Event Delivery Network (EDN). Tables can also
be used as an event source, and through custom adapters, we can consume messages from virtually any source, including files, sockets, NoSQL
database and JMX, Oracle Messaging Cloud Service, Web Sockets, RSS
feeds and other HTTP streams. The OEP server provides an HTTP Pub/Sub
event channel based on the Bayeux protocol; this allows OEP to consume
messages pushed from a Bayeux-compliant server.
It is quite common that the output from an OEP application is fed into
another OEP application to perform the next iteration of refinement,
filtering and upgrading to an even coarser grained event with even more
business value. An example of this is the combination of an embedded
OEP that processes high frequency local sensor signals into less frequent
aggregate values covering larger areas that are sent to centralized OEP
processors for pattern analysis and aggregation across locations.
Usage Scenario
The core strengths of OEP seem to be:
aggregation of low level signals
cross instance (or conversation) monitor
real time interpreter
in a fast, lean, continuous and decoupled fashion that can easily be integrated with existing applications and engines.
In addition to all obvious fast data use cases high volume of messages
that requires real time processing, such as social media, IoT, network
packets and stock tick feeds there are use cases within closer reach for
most of us. Consider OEP the monitor that can look and analyze across
threads and sessions, at transactional or non-transactional actions, at
middle tier (JMS, HTTP) and database (HTTP push/pull or JDBC/SQL pull)
in a decoupled way that hardly impacts the operations you would like to
keep track of.
CQL was inspired by, derived from and overlaps with SQL. The widespread knowledge of SQL can be leveraged when programming event
processing queries. CQL is allows us to combine event streams and relational data sources in a single query (for example, to join historical and
reference data with the live event feed).
As an aside: The Match_Recognize operator introduced in Oracle Database 12c to perform pattern matching on relational data sets makes use
of the CQL syntax to construct the pattern query.
CQL queries select from an event channel, often with some range or time
window applied, using a where clause to filter on the events that are returned by the query. The select clause refers to event properties and uses
functions and operators to derive results. Note that multiple channels
and other data sources can be joined yes: even outer joined together
Technology Overview
The Oracle Event Processor is part of the SOA Suite license. However, it
is not part of the SOA Suite SCA containerit does not even run in the
same WebLogic Server as the SOA Suite does. It runs on its own streamlined, lightweight serverwhich is POJO based, founded on Spring DM
and an OSGi-based framework to manage services. This server comes
with Jetty, an HTTP container for running servlets, and support for JMS
and JDBC. It has caching and clustering facilitiesoptionally backed by an
Oracle Coherence grid. OEP can handle thousands of concurrent queries
and process hundreds of thousands of events per second. The average
message latency can be under 1 ms.
tions of this class will derive car exit events as well and will generate
events covering five different car parks. This generator is used during
development. When the event processing has been developed and verified, the generator can be replaced by an inbound adapter that reads the
events from JMS, an HTTP channel or some other source.
The output from the OEP application can be reported through outbound
adapters to external receivers, such as a JMS destination, an RMI client, a
Socket endpoint or the Event Delivery Network. During development and
for testing purposes, it is frequently convenient to work with an internal
event receiver. A simple Java Class that implements the StreamSink is be
used to receive the outcomes from the event processor and writes them
to the console. Java Class CarParkReporter implements that interface and
writes simple logging lines based on the events it receives
group by carparkIdentifier)
This query produces an update on the current number of cars per car
park. CQL will publish a result in this case whenever the previous result is
superseded by a new one i.e. with every new car that enters the parking
lot and every car that leaves. Note: the value of property entryOrExit is -1
for an exit and +1 for an entry of a car.
In this section, we determine for cars that leave how long they have
stayed in the car park; we could use that to automatically calculate the
parking fees. We next extend our summary with the average stay duration of cars. We look for cars that have out-stayed their welcome: our car
parks are short stay only, intended for parking up to 36 hours. Cars that
have been in the car park for more than 48 hours become candidate for
towing.
Such an update of the car count is not really required for every car that
enters. We may want to settle for a summary update every five seconds.
This is easily accomplished by rewriting the from clause as:
Correlate events
from
Here we have specified that the results from the query are to be calculated over one day worth of [input] events and should be produced once
every five seconds.
One thing OEP excels at is correlating events. It can find patterns across
events that arrive at various points in time. In this example, we will have
OEP associate the arrival and exit events for the same car. These events
together constitute the car stay for which we could calculate the parking fee. By taking all these car stay events into consideration, we can
have OEP calculate the average stay duration per car park.
The key CQL operator we leverage to correlate events into meaningful
patterns is called MATCH_RECOGNIZE. The query used in the carStay-
StreamExplorer
Stream Explorer is a tool first showcased at Oracle OpenWorld 2014 targeted at the Line of Business User (the non-technical IT consumer).
This tool provides a visual, declarative, browser based wrapper around
Oracle Event Processor and Oracle BAM. With Stream Explorer (assumed
to be available close to the turn of the year) it is every easy to create explorations and dashboard on live [streams of]data reporting in real time
on patterns, correlations, aggregations and deviations. As Oracle puts it:
The business user defines Streams for the input. Such a Stream can be a
(CSV) file, a HTTP Subscriber, EDN event, a JMS destination, a Database
Table or a REST service. These streams carry shape instances. A shape is
a record (event) definition that consists of the properties and their types.
The Streams can perhaps be preconfigured for the LoB user by the tech
savvy IT colleague. The business user can then define Explorations on the
Streams, that are based on predefined templates as shown in the next
figure.,
Summary
Real time findings insight, alert, action - based on vast and possibly fast
data from a variety of sources published to various types of channels
for onwards processing is in a nutshell what Oracle Event Processor can
do for us. By constantly scanning streams of events arriving at unpredictable moments, OEP is able to look out for abnormal conditions, meaningful patterns, relevant aggregates and even important event absences.
Compared to the 11g stack, development for OEP 12c is much easier
thanks to the integration in JDeveloper. The support for the Event Delivery Network in the SOA Suite means that it is easier still to implement
EDA (Event Driven Architecture) with SOA Suite or at least make the
service architecture more event-enabled. Finally, the new StreamExplorer
Resources
The source code for the Saibot Airport Car Park Management example
discussed in this article can be downloaded from GitHub.
Also in GitHub are the sources for the CreditCard Theft case.
A presentation with a quick overview of event processing and how OEP
can be used (presented at Oracle OpenWorld 2014) is available here.
Example of using the Oracle Database 12c Match_Recognize clause to do
CQL-like pattern matching on relational data.
otech partner:
For over 22 years, AMIS has been the leading Oracle technology partner in the Netherlands. AMIS consultants are involved in major Oracle implementations in the Netherlands and in various ground breaking projects worldwide. Based on a solid business
case, we bring the latest technology into actual production for customers .
AMISs reputation for technological excellence is illustrated by the AMIS technology
blog. This blog services over 5.,000 visitors daily and is in the top 3 of the worlds most
visited Oracle technology blogs. There are 3 Oracle ACEs in the AMIS team, including 1
ACE Director, who make regular contributions to the worldwide Oracle community.
AMIS is an Oracle Platinum Partner and was selected as the EMEA Oracle Middleware
partner of the year award in 2014, and in Holland in 2013 and 2011.
AMIS delivers expertise worldwide. Our experts are often asked to:
- Advise on fundamental architectural decisions
- Advise on license-upgrade paths
- Share our knowledge with your Oracle team
- Give you a headstart when you start deploying Oracle
- Optimize Oracle infrastructures for performance
Click here for
- Migrate mission-critical Oracle databases to cloud based
our
review on
infrastructures
OOW 2014
- Bring crashed Oracle production systems back on-line
- Deliver a masterclass
Patrick Barel
Lucas Jellema
www.amis.nl
www.amis.nl
Amis
Edisonbaan 15
3439 MN Nieuwegein
+31 (0) 30 601 6000
info@amis.nl
www.amis.nl
www.twitter.com/AMIS_Services
https://www.facebook.com/AMIS.Services?ref=hl
specializations: Oracle ADF 11g, Oracle Application Grid 11g
Oracle BPM 11g, Oracle Database 11g, Oracle Enterprise
Manager 12c, Oracle SOA Suite 11g, Oracle RAC 11g, Oracle
Weblogic Server 12c, Exalogic
Mahir M Quluzade
www.mahir-quluzade.com
Oracle
Database
In-Memory
(Part I)
Introduction
Usually we are using analytics on Data Warehouses (DWH) which is
stored Online Transactional Processing (OLTP) systems data. Analytics is
complex queries running on very large tables of DWH. But DWHs is not
running in real-time as OLTP. Oracle Database In-Memory (IM) can help
run analytics and reports in real-time OLTP databases. Also Oracle Database In-Memory supports both DWH and mixed workload OLTP databases.
Figure 1: Data Block and Row Piece format
Database Block
Oracle Database has stored data as row format in other words as multicolumn records in data blocks on the disk (Figure 1). In a row format database, each new transaction or record stored as a new row in the table.
A row format is ideal format for OLTP databases, as it allows quick access
to all of the columns in a record since all of the data for a given record are
kept together in memory and on disk, also incredibly efficient for processing DML.
Instance Changes
An Oracle Instance contains Memory and set of Background processes.
Memory divided in two different areas: System Global Area (SGA) and
Program Global Area. Oracle creates server processes to handle the
requests of user processes connected to the instance. One of the important tasks of Sever Processes: read data blocks of objects from data files
into the database buffer cache. By default, Oracle store data in the data-
INMEMORY_SIZE sets the size of the In-Memory Column Store (IM column
store) on a database instance and the default value is 0, which means
that the IM column store is not used, so In-Memory feature is not automatically enabled. We need change INMEMORY_SIZE initialization parameter to non-zero amount for enable IM Column store. If you want this
In-Memory Area
Database mounted.
Database opened.
318767104 bytes
In-Memory Population
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1392508928 bytes
Fixed Size
2924304 bytes
Variable Size
1040187632 bytes
Database Buffers
16777216 bytes
Redo Buffers
13852672 bytes
However, you can enable IM Column Store for tablespaces. So, all tables
and materialized views in the tablespace are automatically enabled for the
IM column store. Also you can enable IM Column store for all objects and
tablespace after creation with ALTER DDL command (Code Listing 4).
Memory Compression (inmemory_memcompress) clause specified compression method for data stored in the IM Column store. The default is
NOMEMCOMPRESS, means no compresses in-memory data. To instruct
the database to compress in-memory data, specify MEMCOMPRESS FOR
followed by one of the following methods (Table 1):
Table 1: Compression methods
Compression
(inmemory_
memcompress)
Description
DML
QUERYLOW
QUERYHIGH
CAPACITYLOW
CAPACITYHIGH
Description
NONE
LOW
MEDIUM
TABLE_NAME INMEMORY
INMEMORY_COMPRESS
------------- ------------ -------------------TBIM1 ENABLED
FOR CAPACITY HIGH
TB1 DISABLED
MVIM_TB1 ENABLED FOR CAPACITY LOW
TBIM2 ENABLED FOR QUERY LOW
TBIM3 ENABLED FOR QUERY HIGH NONE
HIGH
5 rows selected.
CRITICAL
Table altered.
SQL> alter table MVIM_TB1 inmemory memcompress for capacity low;
Table altered.
SQL> alter table TBIM2 inmemory memcompress for query low;
Table altered.
SQL> create table TBIM3 (n number,v varchar2(1))
2 inmemory memcompress for query high;
Table created.
SQL> select table_name, inmemory, inmemory_compression from user_tables;
Objects that are smaller than 64KB are not populated into memory
Also you can use IM column store on Logical Standby database but the
IM Column Store cannot be used on an Active Data Guard standby instance in the current release.
6 rows selected.
DISTRIBUTE (inmemory_distribute) and DUPLICATE (inmemory_duplicate) clauses using only with Oracle Real Application Clusers (RAC). You
can read about this clause in next part of this article series.
Restrictions on IM
IM Column Store has some restrictions:
Index Organized Tables (IOTs) and Clustered Tables cannot be populated into IM Column Store.
LONGS (deprecated since Oracle Database 8i) and Out of line LOBs
data types are also not supported in the IM Column store
Next Part
In next part of this article series you will read, In-Memory scans, joins and
In-Memory with RAC also Oracle SQL Optimizer with In-Memory.
Time Series
Forecasting
in SQL
For demonstration purposes I insert data for two items with seasonal
variations:
insert into sales values (Snowchain, date 2011-01-01, 79);
insert into sales values (Snowchain, date 2011-02-01, 133);
insert into sales values (Snowchain, date 2011-03-01, 24);
...
The inline view mths creates 48 rows with the months from 2011-01-01 to
2014-12-01 the 3 years I have sales data for plus the 4th year I want to
forecast.
By doing a partition outer join on the sales table, I get 48 rows numbered
with ts=1..48 for each item having QTY NULL for the last 12 months:
ITEM
TS MTH
YR MTHNO QTY
---------- --- ------- ----- ----- ---Snowchain
1 2011-01 2011
1 79
Snowchain
2 2011-02 2011
2 133
Snowchain
3 2011-03 2011
3 24
...
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
...
Snowchain
Snowchain
Sunshade
Sunshade
Sunshade
...
34
35
36
37
38
39
2013-10
2013-11
2013-12
2014-01
2014-02
2014-03
2013
2013
2013
2014
2014
2014
10
1
11 73
12 160
1
2
3
47
48
1
2
3
2014-11
2014-12
2011-01
2011-02
2011-03
2014
2014
2011
2011
2011
11
12
1
2
3
4
6
32
TS
--1
2
MTH
YR MTHNO QTY
CMA
------- ----- ----- ---- ------2011-01 2011
1 79
2011-02 2011
2 133
6
7
8
9
10
2011-06
2011-07
2011-08
2011-09
2011-10
2011
2011
2011
2011
2011
6
7
8
9
10
29
30
31
32
2013-05
2013-06
2013-07
2013-08
2013
2013
2013
2013
5
6
7
8
0
0
0
1
4
30.458
36.500
39.917
40.208
0 56.250
0 58.083
0
1
Seasonality
with s1 as (...), s2 as (...)
select s2.*
, nvl(avg(
case qty when 0 then 0.0001 else qty end / nullif(cma,0)
) over (
partition by item, mthno
),0) s -- seasonality
from s2
order by item, ts;
The qty divided by the cma gives the factor for how much the particular month sells compared to the average month. The model will fail for
months with qty=0 as that will mean some of the next steps get division
by zero or get wrong results due to multiplying with zero. Therefore any
0 sale I change to a small number.
Seasonality is then the average of this factor for each month. That is calculated by an analytic avg partitioned by item and month number. What
this means is, that the case expression finds the factor for the months
of July 2011 and July 2012 for a given item, and the portioning by month
number then finds the average of those two, and that average (column
alias s) will be in all 4 July rows for that item. Or in other words, seasonality will be calculated for the 12 months and be repeated in all 4 years:
ITEM
---------...
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
TS MTH
YR MTHNO QTY
CMA
S
--- ------- ----- ----- ---- ------- ------9
10
11
12
13
14
15
16
17
18
19
20
21
22
2011-09
2011-10
2011-11
2011-12
2012-01
2012-02
2012-03
2012-04
2012-05
2012-06
2012-07
2012-08
2012-09
2012-10
2011
2011
2011
2011
2012
2012
2012
2012
2012
2012
2012
2012
2012
2012
9
1 39.917 .0125
10
4 40.208 .0774
11 15 40.250 .3435
12 74 40.250 2.5094
1 148 40.250 3.3824
2 209 40.292 4.8771
3 30 40.292 .7606
4
2 40.208 .0249
5
0 40.250 .0000
6
0 44.417 .0000
7
0 49.292 .0000
8
1 51.667 .0097
9
0 53.750 .0125
10
3 54.167 .0774
23
24
25
26
27
28
2012-11
2012-12
2013-01
2013-02
2013-03
2013-04
2012
2012
2013
2013
2013
2013
11 17 54.083 .3435
12 172 54.083 2.5094
1 167 54.083 3.3824
2 247 54.083 4.8771
3 42 54.083 .7606
4
0 54.000 .0249
Deseasonalized quantity
with s1 as (...), s2 as (...), s3 as (...)
select s3.*
, case when ts <= 36 then
nvl(case qty when 0 then 0.0001 else qty end / nullif(s,0), 0)
end des -- deseasonalized
from s3
order by item, ts;
TS
--1
2
3
22
23
24
25
MTH
QTY
CMA
S
DES
------- ---- ------- ------- -------2011-01 79
3.3824 23.356
2011-02 133
4.8771 27.270
2011-03 24
.7606 31.555
2012-10
3 54.167 .0774
2012-11 17 54.083 .3435
2012-12 172 54.083 2.5094
2013-01 167 54.083 3.3824
38.743
49.490
68.542
49.373
Snowchain
...
50.645
Trend (regression)
Having the deseasonalized quantity I use regression to calculate a trend:
with s1 as (...), s2 as (...), s3 as (...), s4 as (...)
select s4.*
, regr_intercept(des,ts) over (partition by item)
+ ts*regr_slope(des,ts) over (partition by item) t -- trend
from s4
order by item, ts;
TS
--1
2
3
34
35
36
37
38
39
MTH
QTY
CMA
S
DES
T
------- ---- ------- ------- -------- ------2011-01 79
3.3824 23.356 32.163
2011-02 133
4.8771 27.270 33.096
2011-03 24
.7606 31.555 34.030
2013-10
1
2013-11 73
2013-12 160
2014-01
2014-02
2014-03
Reseasonalize (forecast)
Having calculated the trend line, I can re-apply the seasonality factor:
with s1 as (...), s2 as (...), s3 as (...), s4 as (...), s5 as (...)
select s5.*
, t * s forecast --reseasonalized
from s5
order by item, ts;
ITEM
---------Snowchain
Snowchain
...
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
...
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
Snowchain
TS
--1
2
MTH
QTY
S
DES
T FORECAST
------- ---- ------- -------- ------- -------2011-01 79 3.3824 23.356 32.163 108.788
2011-02 133 4.8771 27.270 33.096 161.414
34
35
36
37
38
2013-10
1 .0774 12.914 62.976
4.876
2013-11 73 .3435 212.518 63.910 21.953
2013-12 160 2.5094 63.760 64.844 162.718
2014-01
3.3824
65.777 222.487
2014-02
4.8771
66.711 325.357
43
44
45
46
47
48
2014-07
2014-08
2014-09
2014-10
2014-11
2014-12
.0000
.0097
.0125
.0774
.3435
2.5094
71.380
.000
72.314
.700
73.247
.918
74.181
5.744
75.115 25.802
76.049 190.836
And I spot that model might not be perfect but definitely close:
ITEM
---------Snowchain
...
Snowchain
...
Snowchain
Snowchain
...
Snowchain
Snowchain
Snowchain
MTH
QTY FORECAST QTY_YR FC_YR
------- ---- -------- ------ ------2011-01 79 108.788
331 421.70
2012-01 148 146.687
582 556.14
691 690.57
691 690.57
2013-11 73 21.953
2013-12 160 162.718
2014-01
222.487
691 690.57
691 690.57
825.00
2014-02
325.357
825.00
2011-01
2.885
377 390.59
2012-01
2.418
321 322.75
2013-01
1.951
263 254.91
2013-05
2013-06
2013-07
23
46
73
23.645
54.579
51.299
263 254.91
263 254.91
263 254.91
2013-12
2014-01
3.764
1.484
263 254.91
187.07
else Forecast
end type
, sum(
case
when ts <= 36 then qty
else round(t * s)
end
) over (
partition by item, extract(year from mth)
) qty_yr
from s5
order by item, ts;
MTH
QTY TYPE
QTY_YR
------- ---- -------- -----2011-01 79 Actual
331
2012-12 172 Actual
2013-01 167 Actual
582
691
691
825
825
2014-11 26 Forecast
2014-12 191 Forecast
2011-01
4 Actual
825
825
377
2012-12
2013-01
3 Actual
2 Actual
321
263
2013-12
2014-01
2014-02
5 Actual
1 Forecast
7 Forecast
263
187
187
2014-11
2014-12
3 Forecast
3 Forecast
187
187
many items. But she also wanted the results put into a table so that the
forecast data could be used and reused for graphs and application and all
sorts of things for the purchasing department.
No problem, said I, Ill create the table:
create table forecast (
item varchar2(10)
, mth date
, qty number
);
And then with one insert statement, I created the 2014 forecast of hundred thousand items in 1 minute:
insert into forecast
with s1 as (...), s2 as (...), s3 as (...), s4 as (...), s5 as (...)
select item
, mth
, t * s qty -- forecast
from s5
where s5.ts >= 37; -- just 2014 data
Conclusion
The original Time Series Analysis model was here built in Excel with a
series of successive columns with formulas. Using successive with clauses
and analytic functions to calculate centered moving average and regression, I could step by step transform the same model into a SQL statement
Do you want to share your Oracle story to the world? Please fill in our
Call-for-Content or contact Editor-in-Chief Douwe Pieter van den Bos.
OTech Magazine
OTech Magazine is an independent magazine for Oracle professionals. OTech Magazines goal is to offer a clear perspective on Oracle
technologies and the way they are put into action. OTech Magazine
publishes news stories, credible rumors and how-tos covering a
variety of topics. As a trusted technology magazine, OTech Magazine
provides opinion and analysis on the news in addition to the facts.
OTech Magazine is a trusted source for news, information and
analysis about Oracle and its products. Our readership is made up of
professionals who work with Oracle and Oracle related technologies
on a daily basis, in addition we cover topics relevant to niches like
software architects, developers, designers and others.
OTech Magazines writers are considered the top of the Oracle
professionals in the world. Only selected and high-quality articles
will make the magazine. Our editors are trusted worldwide for their
knowledge in the Oracle field.
OTech Magazine will be published four times a year, every season
once. In the fast, internet driven world its hard to keep track of
whats important and whats not. OTech Magazine will help the
Oracle professional keep focus.
OTech Magazine will always be available free of charge. Therefore
the digital edition of the magazine will be published on the web.
OTech Magazine is an initiative of Douwe Pieter van den Bos. Please
note our terms and our privacy policy at www.otechmag.com.
Independence
OTech Magazine is an independent magazine. We are not affiliated, associated, authorized, endorsed by, or in any way officially
connected with The Oracle Corporation or any of its subsidiaries or
its affiliates. The official Oracle web site is available at www.oracle.
com. All Oracle software, logos etc. are registered trademarks of the
Oracle Corporation. All other company and product names are trademarks or registered trademarks of their respective companies.
Authors
Advertisement
Intellectual Property
All content is the sole responsibility of the authors. This includes all
text and images. Although OTech Magazine does its best to prevent
copyright violations, we cannot be held responsible for infringement
of any rights whatsoever. The opinions stated by authors are their
own and cannot be related in any way to OTech Magazine.
OTech Magazine and otechmag.com could contain technical inaccuracies or typographical errors. Also, illustrations contained herein
may show prototype equipment. Your system configuration may differ slightly. The website and magazine contains small programs and
code samples that are furnished as simple examples to provide an
illustration. These examples have not been thoroughly tested under
all conditions. otechmag.com, therefore, cannot guarantee or imply
reliability, serviceability or function of these programs and code samples. All programs and code samples contained herein are provided
to you AS IS. IMPLIED WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE ARE
EXPRESSLY DISCLAIMED.
OTECH MAGAZINE