
Oracle Cloud Infrastructure (OCI) Powershell for finding out public IP addresses of instances in all compartments

OCI PowerShell cmdlets work at least in PowerShell 6.x. Start PowerShell in admin mode and run the command below. If you don’t want the cmdlets installed for all users on the machine, there is an option (-Scope CurrentUser) to install them just for you. Installation might take some time; the package is quite large.

Install-Module -Name OCI.PSModules -Scope AllUsers

You need to have the .oci folder and other prerequisites in place, like the API key, tenancy OCID etc. When you generate the API key in user management, you get a template for the config file.
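For reference, a minimal config file (typically %USERPROFILE%\.oci\config on Windows) looks roughly like this; all values below are placeholders:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your user OCID>
fingerprint=<API key fingerprint>
tenancy=ocid1.tenancy.oc1..<your tenancy OCID>
region=eu-frankfurt-1
key_file=C:\Users\<you>\.oci\oci_api_key.pem
```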

Actual script

This script is an example of how to fetch things from all compartments; the public IP address was just something I needed. I first thought of putting the compartment name in the array as well, but had some trouble getting that to work, so I left it out.

# Import the OCI module if it's not already imported
Import-Module OCI.PSModules

# Initialize variables
$rootCompartmentId = "<root compartment OCID>"

# List all compartments under the root compartment
$compartments = Get-OCIIdentityCompartmentsList -CompartmentId $rootCompartmentId

# Create an ArrayList to hold all compartments, including the root
$allCompartments = New-Object 'System.Collections.ArrayList'

# Add root compartment
[void]$allCompartments.Add([PSCustomObject]@{Id = $rootCompartmentId; Name = "Root Compartment"})

# Define a recursive function to get compartments
function GetCompartmentsRecursively {
    param (
        [string]$compartmentId
    )

    # Fetch compartments directly under the provided compartment ID
    $directCompartments = Get-OCIIdentityCompartmentsList -CompartmentId $compartmentId

    # Loop through each direct compartment
    foreach ($compartment in $directCompartments.Id) {
        # Add the compartment to the ArrayList
        [void]$allCompartments.Add([PSCustomObject]@{Id = $compartment})

        # Recursive call to get nested compartments
        GetCompartmentsRecursively -compartmentId $compartment
    }
}

# Start the recursive fetching from the root compartment
GetCompartmentsRecursively -compartmentId $rootCompartmentId

# Enumerate through each compartment in ArrayList
foreach ($compartment in $allCompartments.Id) {
    $compartmentId = $compartment
    #Write-Host "compartmentid: $compartmentid"

    # List all instances in the current compartment
    $instances = Get-OCIComputeInstancesList -CompartmentId $compartmentId

    # Enumerate through each instance
    foreach ($instance in $instances) {
        #write-host "$($instance.displayname)"

        # Filter based on the instance name (you can change this condition)
        if ($instance.DisplayName -like "*<yourmachinenamepart>*") {
            write-host "$($instance.displayname)"
            $instanceId = $instance.Id
            #write-host "$($compartmentid)"
            #write-host "$($instanceid)"
            
            # List all VNIC attachments for the instance
            $vnicAttachments = Get-OCIComputeVnicAttachmentsList -CompartmentId $compartmentId -InstanceId $instanceId

           # Enumerate through each VNIC attachment
            foreach ($vnicAttachment in $vnicAttachments) {
                $vnicId = $vnicAttachment.VnicId

                # Get VNIC details to fetch the public IP
                $vnic = Get-OCIVirtualNetworkVnic -VnicId $vnicId

                # Print compartment OCID, instance name, and public IP
                # ($compartment holds only the OCID string, so print the ID)
                Write-Host "Compartment: $compartmentId, Instance Name: $($instance.DisplayName), Public IP: $($vnic.PublicIp)"
            }
        }
    }
}

Unified auditing for Oracle 19c

Unified auditing is Oracle’s newer way of auditing database activities. It should be faster than mixed mode or traditional auditing, but I won’t ponder that here. This is just a short note on how to take it into use on Windows.

Setting up

  • Stop all Oracle services
  • Rename %ORACLE_HOME%/bin/orauniaud12.dll.dbl to %ORACLE_HOME%/bin/orauniaud12.dll.
  • Start services

Verify

SELECT PARAMETER, VALUE FROM V$OPTION WHERE PARAMETER = 'Unified Auditing';

If the option works as expected, it returns:

PARAMETER VALUE
---------------- ----------
Unified Auditing TRUE

After this you can create the actual policy and its settings. Connect to the database with a user that has the AUDIT_ADMIN role. The following settings are based on the https://cisecurity.org Oracle 19c benchmark.

CREATE AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY PRIVILEGES CREATE USER;
AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE USER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER USER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP USER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE ROLE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER ROLE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP ROLE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS GRANT;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS REVOKE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE PROFILE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER PROFILE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP PROFILE;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE DATABASE LINK;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER DATABASE LINK;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP DATABASE LINK;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE SYNONYM;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER SYNONYM;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP SYNONYM;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD PRIVILEGES SELECT ANY DICTIONARY;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALL ON AUDSYS.AUD$UNIFIED;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE PROCEDURE, CREATE FUNCTION, CREATE PACKAGE, CREATE PACKAGE BODY;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER PROCEDURE, ALTER FUNCTION, ALTER PACKAGE, ALTER PACKAGE BODY;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP PROCEDURE, DROP FUNCTION, DROP PACKAGE, DROP PACKAGE BODY;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER SYSTEM;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS CREATE TRIGGER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS ALTER TRIGGER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS DROP TRIGGER;
ALTER AUDIT POLICY CIS_UNIFIED_AUDIT_POLICY ADD ACTIONS LOGON, LOGOFF;
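Once the policy is enabled, you can check it and browse the captured events with a couple of queries; these are sketches, run as a suitably privileged user:

```sql
-- Which unified audit policies are enabled
SELECT * FROM AUDIT_UNIFIED_ENABLED_POLICIES;

-- Recent audit events captured by unified auditing
SELECT EVENT_TIMESTAMP, DBUSERNAME, ACTION_NAME
FROM UNIFIED_AUDIT_TRAIL
ORDER BY EVENT_TIMESTAMP DESC;
```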

Cleaning up logs

Set the archive timestamp for audit records from some script, e.g. after running your backup script, to mark that they have been backed up. I recommend keeping the archive time some way in the past, e.g. 30 days as below, to make sure all audit logs really are backed up in case the backup malfunctions.

BEGIN
DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
AUDIT_TRAIL_TYPE     =>  DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
LAST_ARCHIVE_TIME    =>  SYSDATE-30);
END;
/

Create a purge job for the audit logs; the scheduler takes care of running it.

BEGIN
DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  AUDIT_TRAIL_PURGE_INTERVAL  => 24,
  AUDIT_TRAIL_PURGE_NAME      => 'Audit_Trail_PJ',
  USE_LAST_ARCH_TIMESTAMP     => TRUE);
END;
/
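To verify that the purge job got created, the audit management views can be queried; SELECT * is the safe bet, since the column sets vary between versions:

```sql
SELECT * FROM DBA_AUDIT_MGMT_CLEANUP_JOBS;
```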

Sources

https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/administering-the-audit-trail.html#GUID-91B17A5D-A702-4C9D-92C4-CF62A4BC4A5A

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/recommended-and-best-practices-complete-upgrading-oracle-database.html#GUID-21FE7F97-DE79-43D4-A8DD-D66035C17608

Oracle DBMS_LDAP and LDAPS (SSL/TLS encrypted LDAP)

We needed to connect from Oracle to Microsoft AD’s SSL/TLS encrypted LDAP port, tcp/636, to facilitate connections without exposing passwords to the network. There weren’t many good instructions, so I had to gather the info from multiple sources. I just want to show the process here for simplicity.

The process

  • Check what certificate the AD server is using. If it is commercial, install the commercial vendor’s root CA and intermediate CA certs into the Oracle wallet on the server as trusted certs. If self-signed, install the self-signed public key part of the CA and intermediate CA into the Oracle wallet as trusted certs. Oracle has no other means of knowing whether the certs it sees are trustworthy; I don’t believe it checks CRLs either. I used Oracle Wallet Manager; orapki will do the same from the command line. If you can browse to a site using the required cert, you can save the cert from the browser in .cer format for the wallet.
  • If you want to do mutual authentication, you also have to import a user certificate for the server to authenticate itself with. A single server cert works just fine, and it can be self-signed as long as the AD server trusts the certificate authority. According to DigiCert, Oracle doesn’t approve of wildcard certs; you have to request a duplicate without that property for the server name. In this case, generate a CSR from Wallet Manager or orapki and install the received cert as a user certificate for that CSR.
  • Add access to the LDAP server via a network ACL. Use a resolvable name and the LDAPS port tcp/636. The regular LDAP port tcp/389 is for the plaintext version.
BEGIN
-- this requires that you already have an ACL created
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (acl => 'my_net_perms.xml', 
  host => 'myldapserver.domain.com', lower_port => 636, 
  upper_port => 636);
END;
/
SET SERVEROUTPUT ON
DECLARE
-- Adjust as necessary.
l_ldap_host VARCHAR2(256) := 'myldapserver.domain.com';
--SSL
l_ldap_port VARCHAR2(256) := '636';
--no SSL
-- l_ldap_port VARCHAR2(256) := '389';
l_ldap_user VARCHAR2(256) := 'domain\username';
l_ldap_passwd VARCHAR2(256) := 'password';

l_retval PLS_INTEGER; 
l_session DBMS_LDAP.session;

begin
-- allow exceptions
dbms_ldap.use_exception := true;
-- connect to host
l_session := DBMS_LDAP.init(hostname => l_ldap_host, portnum => l_ldap_port);
-- change the connection to SSL/TLS
-- sslauth needs to be 2 for the one-way verification to happen, 3 for two-way
l_retval := DBMS_LDAP.open_ssl (ld => l_session, sslwrl =>
  'file:drive:\path\wallet\', sslwalletpasswd => 'walletpwd',
  sslauth => 2); 
-- bind to session
l_retval := DBMS_LDAP.simple_bind_s(ld => l_session, dn => l_ldap_user, 
  passwd => l_ldap_passwd);
end;
/

Wireshark originally suggested that Oracle supports only a handful of ciphers in DBMS_LDAP, but in the end the connection negotiated a modern cipher nicely, at least with a 12.2 database.


Oracle Native Network Encryption in Standard Edition

Oracle has changed the licensing terms for Native Network Encryption (NNE) in Standard Edition. Earlier the requirement was roughly Enterprise Edition plus the Advanced Security Option (ASO), but currently the licensing agreement states that NNE can be used in SE starting from version 11.2.0.4.0; see Doc ID 1930738.1 on support.oracle.com.

We verified this by adding the following lines to sqlnet.ora on a server and capturing the traffic with Wireshark.

SQLNET.ENCRYPTION_SERVER=REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER=AES128

The required Oracle version applies to clients as well, including the full client, instant client and ODP.NET managed client. An older managed client version might require manual removal before the new one installs correctly.
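On the client side, the corresponding sqlnet.ora parameters are the client variants; a sketch, adjust the algorithm list to your policy:

```text
SQLNET.ENCRYPTION_CLIENT=REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT=(AES128)
```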

There might also be TNS-12657 errors after enabling encryption; those can be fixed by copying in one DLL from the EE version as per Doc ID 1930944.1.


Oracle DBSAT usage

I tried the Oracle Database Security Assessment Tool (DBSAT) and was rather pleased with the results and its approach to checking database security. I haven’t had time to compare it against what I have earlier checked manually, but it seemed quite comprehensive.

There are three separate parts in running the tool: collecting info, reporting from it and discovering PII.

Some prerequisites were mentioned in the docs, but I hit some small problems during the test run, so here are the prerequisites I felt were missing.

  • A full Oracle installation with the zip and unzip utilities; I don’t know if they come with the client alone. It is easier to run on the server, but that requires more programs to be installed for reporting and discovery.
  • ORACLE_HOME must be defined.
  • Python (mentioned for the reporter) – I had to use the executable installer version, since the embedded distribution didn’t seem to have all the libraries and I didn’t have time to hunt for them. Oracle could possibly bundle a working version with the tool.
  • PATH should contain the Python directory.
  • Java runtime environment (mentioned for the discoverer) – requires JAVA_HOME to be defined down to the JRE directory, e.g. C:\Program Files\Java\jre1.8.0_162.

What I need now is a sensitive-data pattern list for localized data.


Strawberry Perl, DBD::Oracle module and Oracle Instant Client 12.2

I’ve been quiet for a long time, but now I ran into something I thought might be worth writing.

Recently I had to update 64-bit Strawberry Perl with the newest DBD::Oracle module and the newest Oracle Instant Client 12.2. The process is actually quite simple, but compiling and running DBD::Oracle has some prerequisites which took me some time to understand. So, to ease someone else’s pain..

Download and unpack the Oracle 64-bit Instant Client packages into one directory, e.g. C:\oracle_ic_12.2. The required client packages are:

  • Basic
  • SQL*Plus
  • SDK

Keep the directory structure present in the archives. Add the instant client path to your PATH environment variable, and also add an ORACLE_HOME variable pointing to the same instant client directory.
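For the current session, the environment setup could look roughly like this in cmd.exe (paths are examples; use System Properties or setx to make them permanent):

```text
set PATH=C:\oracle_ic_12.2\instantclient_12_2;%PATH%
set ORACLE_HOME=C:\oracle_ic_12.2\instantclient_12_2
```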

These are enough to satisfy DBD::Oracle make process with cpan:

cpan -m DBD::Oracle

The actual usage on Windows requires you to install the Visual Studio 2013 runtime, x64 version. You can probably find it at https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads

You can easily test the DBD::Oracle connection with the following script:

use DBI;

my $dbh = DBI->connect("dbi:Oracle://youroracleserver:1521/service.name", "scott", "tiger");

If it complains about a wrong username or password, the connection itself was established successfully. If it logs in successfully, you might want to remove the sample schemas.. 🙂


Duplicating databases with RMAN through a share

Duplicating Oracle databases with RMAN on Windows 2003/2008 is quite simple to do by following Oracle’s documentation. There are, however, a couple of catches if you don’t have the flash recovery area backups available through the same path as on the target database. This is true especially in Oracle 10g2, while 11g2 has new options to avoid pitfalls present in earlier versions. To clarify Oracle’s terms: the target database is the source database, while the auxiliary database is the actual duplication target.

First of all, there is a good article about the problem arising when duplicating a database through a share on Windows. Its solution is included in the following list, which contains all the workarounds I stumbled upon or invented with my colleague P. Kaila while we needed to duplicate a clustered database where the disks containing the flash recovery area were on the other node.

The simplified situation was this:

– Node 1 had disks G: and H: for instance DUPDB.

– Node 2 had disks E: and F: for instance SRCDB, where F: contained Flash Recovery Area (FRA), hence the backups.

– Oracle 10.2 needs a connection to the target (source) database for the duplication, while 11.2 doesn’t.

– Oracle 10.2 does not have an option in RMAN DUPLICATE command to specify backup location, hence the next limitation applies.

– Oracle 10.2 needs the backups available in exactly the same source path as on the target database, i.e. F:\ORACLE\FRA\. As it happens, node 1 had only disks G: and H:. Mapping \\node2\F$ to F: was possible, but does not work because Oracle tries to access it under the SYSTEM account. It just causes the following error stack:

ORA-19870: error reading backup piece …

ORA-19505: failed to identify file …

ORA-27041: unable to open file

OSD-04002: unable to open file

O/S-error: (OS 3) The system cannot find the path specified.

After some research we settled with the following workaround:

– Share node2 F$ as ftemp and give node1’s machine account full permissions to it.

– Give node1’s machine account full NTFS permission on the node2’s f$ root and directories below.

– Create the instance as instructed in the Oracle documentation for db duplication:

— Create service and start it with oradim.

oradim -new -sid dupdb -startmode manual -spfile

oradim -edit -sid dupdb -startmode auto -srvcstart system

— Create pwfile with orapwd.

orapwd file=pwddupdb.ora password=… force=n

— Configure tnsnames.ora and listener.ora against new instance, check connectivity to both source and duplicate databases with tnsping.

— Create init.ora in standard location ORACLE_HOME\database. You can copy the init.ora from source database and change all the necessary parameters. To make the initialization parameters work on the duplicate database with different directory structure, make sure to include and change the following parameters:

control_files

db_file_name_convert='F:\ORADATA\SRCDB','G:\ORADATA\DUPDB','…

log_file_name_convert='F:\ORADATA\SRCDB','G:\ORADATA\DUPDB','…

# You can also use other renaming options available.

db_name=dupdb #to change the name of the database

The *_dest parameters should all reflect the new directory structure.

— Sqlplus: Convert the init.ora to spfile in the dupdb host.

connect sys/@dupdb as sysdba;

create spfile from pfile;

— Start the auxiliary i.e. duplicate database in the same sqlplus with the newly generated spfile.

startup force nomount;

– Create a duplicate.cmd script on node1 as follows:

net use F: \\node2\ftemp

rman target sys/@srcdb auxiliary sys/@dupdb cmdfile=duplicate.ini log=c:\temp\duplicate.log

net use F: /delete

where duplicate.ini contains the following run block:

run

{

duplicate target database to dupdb;

}

– Schedule the duplicate.cmd script with command line scheduled tasks tool to create a task running under SYSTEM account.

schtasks /create /RU "SYSTEM" /SC ONCE /TN "Duplicate db" /TR "c:\temp\duplicate.cmd" /ST <<current time>>

(I didn’t have the patience to find out why schtasks needed the actual time to run the job, since it should default to current time.)

– Wait for the task to run and the RMAN duplicates the database just fine. 🙂

– Reset the file name conversion parameters in dupdb sqlplus, if no need for reduplication:

alter system reset db_file_name_convert scope=spfile sid='*';

alter system reset log_file_name_convert scope=spfile sid='*';

– As a finishing touch, restart the dupdb.

shutdown immediate;

startup;

And you’re done. I love RMAN duplication! 🙂

 

Other options may apply if the environment allows for it.

a) On 11g2 you can use the DUPLICATE … BACKUP LOCATION 'path' option to avoid the share and machine account hassle.

b) The approach suggested in the earlier article was to use a UNC path when creating the backup. This means that db_recovery_file_dest is set to \\SERVER\SHARE, which causes the DUPLICATE command to look for the backup in that same path. I haven’t tested whether it needs additional NTFS permissions to succeed; we didn’t want to touch the source database settings.

c) Start OracleServiceSID on the dupdb host with a domain account able to access the FRA on the source machine, and map F$ to F: on the dupdb host.

d) If possible, create a local disk on dupdb host with drive letter F: and copy the FRA from the source machine to that drive.

Reduplication

When reduplicating, there are some things to consider.

– Perform online backup at the source database before starting to include all changes.

– You can include the *_convert parameters in the spfile to keep the new names correct; then there is no need to reset them. There are also other possibilities for renaming the files on the fly, including RMAN commands. E.g. aux filenames are apparently stored in the dupdb control file to apply the path changes every time the db is duplicated.

– To enable reduplication, sqlplus this:

shutdown immediate;

startup nomount;

exit;

The exit command is essential, because without it the reduplication fails in the end.

– When you reduplicate with Oracle 10.2, there is a bug which requires the tempfiles to be recreated, and you absolutely have to be logged out of the database while duplicating. This can be fixed with something like:

create temporary tablespace TEMP2 tempfile 'g:\oracle\oradata\<ORACLE_SID>\temp02.dbf' size 2G;

alter database default temporary tablespace TEMP2;

drop tablespace TEMP;

create temporary tablespace TEMP tempfile 'g:\oracle\oradata\<ORACLE_SID>\temp01.dbf' SIZE 4G reuse AUTOEXTEND ON NEXT 640K MAXSIZE 10G;

alter database default temporary tablespace TEMP;

drop tablespace temp2;

– Remember that you have to change the passwords for the users, if you don’t want the same passwords for the source and duplicate db.

– Remove from archivelog mode if that is not needed.

Using OS credentials with DIRECTORY objects

Someone needed access to DIRECTORY objects located behind a UNC path. There are actually two cases: first, where the machines are located in the same domain or in domains having some sort of trust relationship, and second, where the machines are in non-trusting domains. Obviously the first case is a bit easier and is presented here first. I will publish the second case later, after having time to finish it off.

Kudos to E. Soini and P. Kaila for initial researching and testing out the “same domain” case.

 

Machines in the same domain

 

Let’s assume a couple of things. We already have a directory on some remote server and it is shared; it can be referenced with the UNC path \\remoteserver\remoteshare. Our server is named myserver. There is a file called test1.txt on the remote server.

First we need to grant some rights to the remote directory via NTFS ACLs, for example read and write permissions. The permissions should be granted to the MYSERVER computer object via the OS GUI or command line tools; I’m not going to go through that here.

After that we need to grant read and write permissions on the share to the same computer object, again with the OS GUI or command line tools.

After this we create the directory objects in Oracle as a user with the CREATE ANY DIRECTORY privilege, and grant permissions on them to the actual user, assuming we are not running everything as a user with full privileges to all objects (which we shouldn’t be).

CREATE OR REPLACE DIRECTORY remotedir AS '\\remoteserver\remoteshare';
CREATE OR REPLACE DIRECTORY localdir AS 'C:\temp';
GRANT READ ON DIRECTORY remotedir TO user;
GRANT WRITE ON DIRECTORY remotedir TO user;
GRANT READ ON DIRECTORY localdir TO user;
GRANT WRITE ON DIRECTORY localdir TO user;

After this we can copy an example file with the UTL_FILE package, as follows. One might try DBMS_FILE_TRANSFER instead, but it is apparently only suitable for files whose size is a multiple of 512 bytes.

BEGIN
UTL_FILE.FCOPY('REMOTEDIR', 'test1.txt', 'LOCALDIR', 'test2.txt');
END;
/

What this means is that if you want to use remote UNC paths with, for example, Data Pump jobs, you can still schedule the task under the LocalSystem (older Windows: SYSTEM) account, since the authentication is based on the machine account. Somewhat easier than changing Oracle to run under some domain user.
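For example, a Data Pump export scheduled under LocalSystem could then write straight through the remote directory object; a sketch, where the schema, file names and credentials are placeholders:

```text
expdp system/password SCHEMAS=scott DIRECTORY=remotedir DUMPFILE=scott.dmp LOGFILE=scott_exp.log
```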


Revoking public access to sys packages

Many Oracle security best practice lists suggest revoking execute rights from PUBLIC on packages owned by SYS. Even better would be revoking all PUBLIC access and granting only the needed packages and objects, no matter who owns them. Both of these are somewhat complicated to achieve, since execute privilege has been granted to PUBLIC on numerous packages and blindly revoking them could be disastrous. I try to give a way to enumerate the accounts using these packages through PUBLIC rights, so that explicit privileges can be granted in place of the implicit ones before revoking PUBLIC access. As a side note, I have done these tests with Standard Edition, which is conveniently at my disposal.

Checking what access PUBLIC has received after an install gives a count of 27314 objects. In my opinion, it would be nice if Oracle changed these default permissions to follow least privilege principles and instead included a script for each of their own features requiring additional privileges.

select count(*) from dba_tab_privs where grantee = 'PUBLIC';

Narrowing this down to packages owned by SYS leaves only 203 objects. The following script is from the SANS SCORE v3.1 checklist.

(select object_name from dba_objects where owner ='SYS' and 
object_type='PACKAGE')
intersect
(select table_name from dba_tab_privs where grantee='PUBLIC' and 
privilege='EXECUTE');

The number of packages is not as big anymore, but the problem is that many features and your own applications might use these packages through the implicit permissions given to PUBLIC. To reduce the risk of revoking a permission without granting an explicit one in its place, the usage of the objects can be audited. Oracle SE doesn’t have Fine Grained Auditing (FGA), but it still has the normal session and access based audit options. Enabling auditing requires some preparation, as can be seen from the manual. The initialization parameter audit_trail needs to be set to some value other than none, which is the default; I chose the value db to keep the audit records inside the same database, in the SYS.AUD$ table. Another way of finding out public access would be querying all_source, but that doesn’t reveal access from inside precompiled packages, and some types, like HTTPURITYPE, use these packages even though that is not obvious.

alter system set audit_trail = db scope = spfile;

After this, a restart of the database is required for the parameter to become active. The audit records can be queried from the table dba_audit_trail and there might already be some rows if the parameter has been set earlier. After enabling this parameter, some mandatory audit events like SYSDBA logins are logged, so the size of the table will start to grow slowly. I will discuss purging old logs later in this post.

Now I have to define what I want to audit, which consists of all packages granted to PUBLIC and owned by SYS. The following query generates the audit statements to run. Access by SYS itself is not audited with these clauses, since the audit_sys_operations parameter is not set.

select 'audit execute on sys.'||p.object_name||' by access whenever successful;'
from ((select object_name from dba_objects where owner = 'SYS' and
object_type = 'PACKAGE')
intersect
(select table_name from dba_tab_privs where grantee = 'PUBLIC' and
privilege = 'EXECUTE')) p;

The clauses given should be something like the following.

audit execute on sys.DBMS_ADDM by access whenever successful;
audit execute on sys.DBMS_ADVISOR by access whenever successful;
audit execute on sys.DBMS_APPLICATION_INFO by access whenever successful;
audit execute on sys.DBMS_AQJMS by access whenever successful;
...

After this dba_audit_trail becomes quickly populated if the packages are accessed. You can verify this by querying.

select * from dba_audit_trail order by timestamp desc;

If you want to see only executions of the packages granted to PUBLIC, the following will help. You could assume that no other packages are audited, but if that is not the case, this applies the previously used intersection.

(select obj_name from dba_audit_trail where 
action_name = 'EXECUTE PROCEDURE')
intersect
((select object_name from dba_objects where owner ='SYS' and 
object_type='PACKAGE')
intersect
(select table_name from dba_tab_privs where grantee='PUBLIC' 
and privilege='EXECUTE'))

Now comes the hard part. When can you be certain that every aspect and feature of your applications has been tested? Or of Oracle’s own features? It is very probable that you can never be completely certain, but given enough time, like several months, you can be pretty sure that most of the functionality has been used at least once. And if some operations are run only once a year, wait for them to run.

A warning has to be given that auditing generates a lot of data, especially in transaction intensive applications. Be sure to monitor the growth of the SYSTEM tablespace, where the AUD$ table resides. This brings us to configuring a cleanup process for the audit records. If you want to verify that no previous cleanup process has been initialized, please check the auditing manual. The initialization needs to be performed only once, but there is a catch in the cleanup procedure: it moves the AUD$ table to the SYSAUX tablespace if it hasn’t already been moved manually, so you need to change which tablespace you monitor for growth. The last procedure creates a scheduler job that purges audit trail rows created before the archive timestamp; it runs every 12 hours, based on the purge interval below. These are modified from the Oracle manuals to reflect my case.

BEGIN
 DBMS_AUDIT_MGMT.INIT_CLEANUP(
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
  DEFAULT_CLEANUP_INTERVAL    => 12 );
END;
/
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
   AUDIT_TRAIL_TYPE     =>  DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
   LAST_ARCHIVE_TIME    =>  to_timestamp('2011-08-01 00:00:00',
'YYYY-MM-DD HH24:MI:SS.FF')
   );
END;
/
BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (
   AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, 
   AUDIT_TRAIL_PURGE_INTERVAL  => 12,
   AUDIT_TRAIL_PURGE_NAME      => 'Standard_Audit_Trail_PJ',
   USE_LAST_ARCH_TIMESTAMP     => TRUE );
END;
/

Bear in mind that these procedures actually delete the audit data gathered. So take a note of the results before purging them.

I needed to increase the number of rows deleted during one cleanup run, since otherwise the procedure couldn’t delete rows as fast as they accumulated.

BEGIN
 DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
  AUDIT_TRAIL_TYPE            => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
  AUDIT_TRAIL_PROPERTY        => DBMS_AUDIT_MGMT.DB_DELETE_BATCH_SIZE,
  AUDIT_TRAIL_PROPERTY_VALUE  => 200000);
END;
/

Now you can query the scheduled jobs from dictionary views.

SELECT * FROM DBA_AUDIT_MGMT_CLEANUP_JOBS;
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'STANDARD_AUDIT_TRAIL_PJ';

Now the only problem is that the archive timestamp should be increased every time you have processed all gathered logs to find out what is using the PUBLIC SYS packages.

Revoking access

When looking through DBA_AUDIT_TRAIL, you can see entries like:

OS_USERNAME	USERNAME ...	OWNER	OBJ_NAME
MYDOM\MYUSER	MYORCLUSER ...	SYS	DBMS_UTILITY

This and similar entries give you the information needed for specific grants. After compiling a list of the needed permissions on objects owned by SYS, you can grant them and revoke the permissions from PUBLIC. Note: be careful and do this in small groups of packages, since much of Oracle’s own functionality also uses these permissions, and you need to be sure that basic things like export and import keep working.

GRANT EXECUTE ON SYS.DBMS_UTILITY TO MYORCLUSER;
REVOKE EXECUTE ON DBMS_UTILITY FROM PUBLIC;
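To see which EXECUTE grants PUBLIC still holds on SYS objects, DBA_TAB_PRIVS can be queried:

```sql
SELECT TABLE_NAME
FROM   DBA_TAB_PRIVS
WHERE  GRANTEE = 'PUBLIC' AND OWNER = 'SYS' AND PRIVILEGE = 'EXECUTE'
ORDER  BY TABLE_NAME;
```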

We found several dependencies, listed below (thanks to P. Kaila and E. Soini). There are many more; these are just examples.

--System needs these for exports
GRANT EXECUTE ON DBMS_JVM_EXP_PERMS TO SYSTEM;
GRANT EXECUTE ON UTL_FILE TO SYSTEM;
GRANT EXECUTE ON DBMS_JAVA TO SYSTEM;
GRANT EXECUTE ON DBMS_JAVA_TEST TO SYSTEM;
--Sysman needs these for em/db control
GRANT EXECUTE ON UTL_RAW TO SYSMAN;
--Mdsys
GRANT EXECUTE ON DBMS_LOB TO MDSYS;
-- Ordplugins
GRANT EXECUTE ON UTL_RAW TO ORDPLUGINS;
-- Xdb
GRANT EXECUTE ON DBMS_LOB TO XDB;
GRANT EXECUTE ON UTL_FILE TO XDB;
GRANT EXECUTE ON UTL_RAW TO XDB;
-- Ctx_sys
GRANT EXECUTE ON DBMS_LOB TO CTXSYS;
GRANT EXECUTE ON UTL_RAW TO CTXSYS;
-- Dbsnmp
GRANT EXECUTE ON UTL_RAW TO DBSNMP;
-- Exfsys
GRANT EXECUTE ON DBMS_LOB TO EXFSYS;

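Before revoking a package from PUBLIC, DBA_DEPENDENCIES can help spot dependent schemas; for example, for UTL_FILE:

```sql
SELECT OWNER, NAME, TYPE
FROM   DBA_DEPENDENCIES
WHERE  REFERENCED_OWNER = 'SYS' AND REFERENCED_NAME = 'UTL_FILE'
ORDER  BY OWNER, NAME;
```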
That’s it for this time.

Categories: Oracle

Statspack scheduling through dbms_scheduler

With 11g2, Oracle pushes only the Enterprise Edition (EE) plus Diagnostics Pack combination, i.e. the Automatic Workload Repository (AWR), for statistics reporting, which doesn’t help Standard Edition (SE) or lower users. We are still forced to use the (luckily) still existing statspack package for performance tuning and reporting.

I don’t like redundant information, so this won’t be another post on installing and using Oracle statspack. There are plenty of good articles about that, such as Akadia’s article and chapter 21 of the Oracle 9.2 Performance Tuning documentation. Instead, I’m going to tell you how I scheduled the statspack scripts to run automatically via dbms_scheduler.

Oracle 11g2 contains statspack, but its documentation has diminished from previous versions. The required scripts are in the %ORACLE_HOME%\rdbms\admin folder, with names beginning with “sp”. The folder also holds a document called spdoc.txt, which is nice, since the rest of the documentation has been removed from the Performance Tuning Guide. The Oracle documentation now mentions this file only in the “Streams Concepts and Administration” manual, which doesn’t feel like a logical place to me.

Spdoc.txt and the Oracle 9.2 documentation list all the scripts and their purpose; spdoc additionally documents the updates made to the scripts up to version 11.1. Since the old spauto.sql contains directions to use dbms_job, I converted it to do the same thing through dbms_scheduler to be more modern. We have also run into problems with jobs scheduled in time zones using DST, causing excessive errors in alert.log. So in essence, this article is more about the scheduler than statspack. 🙂
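One way to reduce the DST-related errors is to give the scheduler an explicit region-based default time zone instead of a fixed offset; a sketch (the region name is just an example):

```sql
BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
    'default_timezone', 'Europe/Helsinki');
END;
/
```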

After installing statspack, the procedure is simple. First, as SYS, I grant the perfstat user the privilege to create scheduler jobs.

sqlplus / as sysdba
grant create job to perfstat;

Next I need to log in as perfstat and create the actual job. I want two snaps ten minutes apart, three times a day. This gives me some coverage throughout the day, but I suspect I will have to modify the schedule later, once I have collected some data to analyze. So this schedule should be considered just an example.

sqlplus /nolog
connect perfstat/pw;
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
   job_name        => 'snap_1',
   job_type        => 'STORED_PROCEDURE',
   job_action      => 'statspack.snap',
   repeat_interval => 'FREQ=HOURLY; BYHOUR=9,12,15; BYMINUTE=5,15',
   auto_drop       => FALSE,
   enabled         => TRUE,
   comments        => 'Statspack automated snap');
END;
/

This creates a scheduler job with the current time as start_date; execution happens at the times given by repeat_interval: 9:05, 9:15, 12:05, 12:15, 15:05 and 15:15. The job is enabled at the same time. The scheduler is very versatile, with additional options for job classes, groups, schedules, windows and so on, but these options are sufficient for my purpose.
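Whether the snapshots are actually being taken can be checked from the scheduler’s run history, logged in as perfstat:

```sql
SELECT JOB_NAME, STATUS, ACTUAL_START_DATE
FROM   USER_SCHEDULER_JOB_RUN_DETAILS
WHERE  JOB_NAME = 'SNAP_1'
ORDER  BY ACTUAL_START_DATE DESC;
```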

I also needed another job to purge old snapshots from the db, to keep SYSAUX from growing uncontrollably. It was a bit disappointing to notice that I couldn’t use a simple job for purging old snaps: statspack.purge is overloaded, and a plain job cannot tell which version I’m calling just from the argument positions. To define arguments by name, I tried using a program, which in turn requires additional steps to define the arguments and the job and to enable both. The problem was that even with this approach Oracle couldn’t resolve which overload I was calling, or ran into type mismatches.

There was also a possibility to use anonymous PL/SQL block instead of a stored procedure, but in my opinion, that would make altering the parameters more difficult by making it mandatory to redefine the job again, instead of just setting an argument for it. Oh, and I misunderstood one part in the documentation and also wanted to do something in PL/SQL. 🙂

In the end I wrote a small wrapper which can later be customized to access the different purge versions under completely separate names. This made it possible to revert to simple jobs, as can be seen below in the wrapper code. Another way is to delete the old rows directly with an anonymous block, as stated in the article by dba-oracle.com, but that doesn’t have all the other functionality the purge procedures have:

delete from stats$snapshot where snap_time < sysdate - 14;

Wrapper and job definition with arguments.

CREATE OR REPLACE
PACKAGE STATSPACK_WRAPPER AS 

  procedure PURGE
    ( i_num_days        IN  number
      , i_extended_purge  IN  boolean default FALSE
      , i_dbid            IN  number  default null
      , i_instance_number IN  number  default null
      );

END STATSPACK_WRAPPER;
/

create or replace
PACKAGE BODY STATSPACK_WRAPPER AS

  procedure PURGE
    ( i_num_days        IN  number
    , i_extended_purge  IN  boolean default FALSE
    , i_dbid            IN  number  default null
    , i_instance_number IN  number  default null
    ) IS
  BEGIN

    STATSPACK.PURGE
    ( i_num_days
    , i_extended_purge
    , i_dbid
    , i_instance_number
    );
  END PURGE;

END STATSPACK_WRAPPER;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
   job_name           =>  'snap_purge_1',
   job_type           =>  'STORED_PROCEDURE',
   job_action         =>  'statspack_wrapper.purge',
   number_of_arguments => 1,
   repeat_interval    =>  'FREQ=DAILY; BYHOUR=19; BYMINUTE=00',
   auto_drop          =>   FALSE,
   comments           =>  'Statspack automated purge');
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name           => 'snap_purge_1',
   argument_position  => 1,
   argument_value     => '14');
DBMS_SCHEDULER.ENABLE('snap_purge_1');
END;
/

That’s it. The regular statspack scripts can still be used: spreport.sql to create reports, and spdrop.sql to get rid of the statspack objects.
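Since both jobs were created with auto_drop => FALSE, remember to drop them as perfstat before removing statspack itself; a sketch:

```sql
BEGIN
  DBMS_SCHEDULER.DROP_JOB('snap_1');
  DBMS_SCHEDULER.DROP_JOB('snap_purge_1');
END;
/
```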

Edit: I had to change the wrapper, since DBMS_SCHEDULER doesn’t accept BOOLEAN values as job arguments. A PLSQL_BLOCK job type won’t work either. Here’s the new wrapper:

CREATE OR REPLACE
PACKAGE STATSPACK_WRAPPER AS 

  PROCEDURE PURGE
	( i_num_days        IN  number
	, i_extended_purge  IN  varchar2 default 'FALSE'
	, i_dbid            IN  number  default null
	, i_instance_number IN  number  default null
	);

END STATSPACK_WRAPPER;
/
CREATE OR REPLACE
PACKAGE BODY STATSPACK_WRAPPER AS

  PROCEDURE PURGE
    ( i_num_days        IN  number
    , i_extended_purge  IN  varchar2 default 'FALSE'
    , i_dbid            IN  number  default null
    , i_instance_number IN  number  default null
    )
  IS
    i_extended_purge_conv BOOLEAN default FALSE;    
  BEGIN
    IF i_extended_purge = 'TRUE'
      THEN i_extended_purge_conv := TRUE;
    END IF;
    
    STATSPACK.PURGE
    ( i_num_days
    , i_extended_purge_conv
    , i_dbid
    , i_instance_number
    );
  END PURGE;
END STATSPACK_WRAPPER;
/


Categories: Oracle