Written by JP Vijaykumar
Date Jan 4th 2020
Modified Jan 5th 2020
This article/script is provided for educational purposes ONLY.
The readers are advised to make necessary changes to the script as may be required for use in their respective environments.
Pls test these scripts thoroughly in a lower environment; ONLY if you are satisfied with the test results should you use them in any
higher environments. It is advisable to monitor the load on the db/server while you are testing this script in your environment.

This script was generated/tested in Oracle 12C environment.
This script may not work in other versions of Oracle, pls modify the script as may be needed using your own discretion/ingenuity.

Pls be extremely careful while using these scripts. If any files are deleted due to a typo, there is no ROLLBACK for
unix commands that were executed wrongly. Pls exercise extreme caution while working with these unix commands, and test these scripts
thoroughly in lower environments.


Dedication towards your job is not a sacrifice. It is the justification for your existence in this world. Rabindranath Tagore

In this article, I will show how to monitor diskspace utilization on unix servers running Oracle dbs.
All our databases are running on Oracle 12C.

In my current project, I need to present capacity planning report for all the servers in the enterprise running Oracle databases.

In our environment, I have to monitor multiple Oracle dbs' diskspace utilization across several servers running linux OS.
For security reasons, passwordless remote logins were not allowed among various servers.
That makes things tougher for remotely monitoring the diskspace usage among all the servers.
At regular intervals, I have to gather information of diskspace utilization, available freespace and report the details to my leads.
Hopping across all servers and typing the password multiple times is a laborious process.

I set up this process; now I just run a one-line shell command from my gitbash console and the diskspace report is ready.
The detailed steps are as follows:

On all the servers, I included the following shell command in our regularly run cronjobs, to capture the df -h command output
into a csv file.

$ cd /home/oracle/jp;>diskspace.csv;export HOST="`hostname`,`date +%Y%m%d`"; df -h |grep "%"|grep -v export|awk -v OFS=',' -v var="$HOST" '{if ($1 == "tmpfs" || $1 == "/dev/sda1" || $1 == "dr-oracle01p:/app" || $1 == "Filesystem" ) {print var,$2,$3,$4,$5,$6} else {print var,$1,$2,$3,$4,$5}}'>>diskspace.csv

Pls note, the above command is specific to our env, so as to filter out unwanted mounts' information.
Pls customize the command with whatever filtering is necessary/required for your specific environment.

contents of diskspace.csv file:

$ cat /home/oracle/jp/diskspace.csv

Or you can setup a separate shell script to execute the above command daily, and schedule it via cron.
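For example, such a wrapper script might look like the sketch below. This is only an illustration: the directory (/tmp/jp here, in place of the article's /home/oracle/jp), the filesystems filtered in the awk branch, and the cron schedule are all assumptions to adapt to your environment.

```shell
#!/bin/sh
# capture_diskspace.sh -- hypothetical wrapper for the df capture step above.
# The directory and the filesystem names in the awk branch are assumptions.
DIR="${DISKSPACE_DIR:-/tmp/jp}"
mkdir -p "$DIR" && cd "$DIR" || exit 1
: > diskspace.csv                          # truncate: keep only current data
HOST="$(hostname),$(date +%Y%m%d)"
df -h | grep "%" | grep -v export | \
awk -v OFS=',' -v var="$HOST" '{
  if ($1 == "tmpfs" || $1 == "/dev/sda1" || $1 == "Filesystem")
    {print var,$2,$3,$4,$5,$6}             # rows where mount info starts at $2
  else
    {print var,$1,$2,$3,$4,$5}
}' >> diskspace.csv
# sample crontab entry (daily at 06:00):
# 0 6 * * * /home/oracle/jp/capture_diskspace.sh
```

The truncate-then-append pattern matches note 11 below: each run keeps only current data, no history.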

connect as sysdba to the database.

SQL>create or replace directory mydir as '/home/oracle/jp';
SQL>grant read,write on directory mydir to public;

SQL>connect Veeksha/Saketh

SQL>drop table diskspace;

create table diskspace(
server_name char(20),
run_date char(10),
disk_size char(10),
used_size char(10),
free_size char(10),
used_pct char(10),
mount_name char(50)
) organization external
(type oracle_loader
default directory mydir
access parameters (
records delimited by newline
badfile 'diskspace.bad'
discardfile 'diskspace.dis'
logfile 'diskspace.log'
fields terminated by "," optionally enclosed by '"'
(server_name char(20),
run_date char(10),
disk_size char(10),
used_size char(10),
free_size char(10),
used_pct char(10),
mount_name char(50)
)
)
location ('diskspace.csv')
)
reject limit unlimited;

SQL>grant select on diskspace to public;

Once in a while, from my gitbash terminal, I run the following one-liner and review the output.

$ >/c/temp/diskspace.txt;for SID in ABCPROD ABCPRODS ABCPRODR ABCDEV ABCPATCH ABCTEST ABCUAT EXEDEV XYZPROD XYZPRODS XYZPRODR XYZDEV SKYTEST;do date;sqlplus -s veeksha/saketh@$SID @c:/jpscripts/diskspace.sql |tee -a /c/temp/diskspace.txt;date;done;
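When a listener or database is down, a plain loop like this hangs or errors out mid-run. A hedged variant that fails fast and keeps going might look like the sketch below; the SIDs, credentials, script path, and the /tmp output location (used here instead of /c/temp) are all placeholders.

```shell
#!/bin/sh
# Hypothetical variant of the report loop: -s for silent, -L to attempt the
# logon only once, so an unreachable listener fails fast instead of hanging.
# SIDs, credentials and paths are placeholders; adjust to your environment.
OUT=/tmp/diskspace_all.txt
: > "$OUT"
for SID in ABCPROD XYZPROD SKYTEST; do
  echo "=== $SID $(date) ===" >> "$OUT"
  sqlplus -s -L veeksha/saketh@"$SID" @/c/jpscripts/diskspace.sql \
    </dev/null >> "$OUT" 2>&1 || echo "skipped $SID (connect failed)" >> "$OUT"
done
```

The `|| echo "skipped ..."` keeps a record of which databases could not be reached, instead of silently losing them from the report.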

I am using a common user to loop through different databases and read the contents of diskspace external table.
For this purpose, I had granted necessary permission in all the databases to the common user.

I can check a specific mount across all the servers and see how the space is utilized:

$ cat /c/temp/diskspace.txt|grep -w "/db"

server_name ,db_name ,run_date ,alloc_gb ,used_gb ,free_gb ,%used ,mount
oracle01p ,ABCPROD ,20200104 ,5.8T ,5.4T ,395G ,94% ,/db
orclrpt01p ,ABCPRODS ,20200104 ,6.0T ,5.6T ,388G ,94% ,/db
dr-oracle01p ,ABCPRODR ,20200104 ,5.8T ,5.5T ,298G ,95% ,/db
oracle01d ,ABCDEV ,20191126 ,5.8T ,5.3T ,475G ,92% ,/db
oracle02t ,ABCPATCH ,20200104 ,5.5T ,5.0T ,498G ,92% ,/db
oracle01t ,ABCTEST ,20200104 ,5.7T ,5.3T ,405G ,94% ,/db
oracle04d ,ABCUAT ,20200104 ,5.8T ,5.3T ,544G ,91% ,/db
exeorcl01d ,EXEDEV ,20200104 ,5.5T ,5.1T ,402G ,93% ,/db
xyzorcl01p ,XYZPROD ,20200104 ,5.8T ,5.1T ,681G ,89% ,/db
xyzrpt01p ,XYZPRODS ,20200104 ,5.8T ,5.1T ,699G ,89% ,/db
dr-xyzorcl01p ,XYZPRODR ,20200104 ,6.0T ,4.8T ,1.3T ,79% ,/db
xyzorcl01d ,XYZDEV ,20200104 ,5.8T ,4.9T ,921G ,85% ,/db
skyorcl01t ,SKYTEST ,20200104 ,5.8T ,5.6T ,206G ,97% ,/db
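Once the consolidated file exists, plain unix text tools can slice it further, for instance ranking the /db rows by used percentage (field 7 in this layout). The sketch below copies a few sample rows from the report above into a throwaway file; in practice you would grep the consolidated /c/temp/diskspace.txt directly.

```shell
#!/bin/sh
# Sort /db rows by used percentage, highest first. Sample rows are copied
# from the report above purely for illustration.
cat > /tmp/ds_report.txt <<'EOF'
oracle01p ,ABCPROD ,20200104 ,5.8T ,5.4T ,395G ,94% ,/db
dr-xyzorcl01p ,XYZPRODR ,20200104 ,6.0T ,4.8T ,1.3T ,79% ,/db
skyorcl01t ,SKYTEST ,20200104 ,5.8T ,5.6T ,206G ,97% ,/db
EOF
# -t',' splits on commas, -k7 keys on %used, -rn sorts numerically descending
grep -w "/db" /tmp/ds_report.txt | sort -t',' -k7 -rn
```

This puts the fullest mounts (SKYTEST at 97% in the sample) at the top, which is usually what a capacity report cares about.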

The contents of diskspace.sql file are:

$ cat c:/jpscripts/diskspace.sql
set linesize 200 pagesize 100
select cast(server_name as varchar2(20)) server_name, cast(db_unique_name as varchar2(10)) name,cast(run_date as varchar2(10)) run_date,
cast(disk_size as varchar2(10)) disk_size, cast(used_size as varchar2(10)) used_size,
cast(free_size as varchar2(10)) free_size, cast(used_pct as varchar2(10)) used_pct,
cast(mount_name as varchar2(30)) mount_name from sys.diskspace,v$database;

01) I installed gitbash on my laptop, so that I can run unix scripts and loop through the list of ORACLE_SIDs.
02) Oracle client software should be installed on the laptop, to invoke sqlplus connectivity to the remote dbs.
03) This script works on remote servers running unix flavours. It does not work for servers running windows OS, as most of the commands are unix related.
04) We are using Red Hat Linux. The "df -h" command may not work on all flavours of unix. If you are working on a different flavour of unix,
pls use an equivalent command in your environment to capture the diskspace details.
Even if the df command works, the "-h" switch may not behave the same in all flavours.
The readers are advised to make changes as may be necessary in their environment.
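One more portable option worth noting for point 04: the POSIX -P switch forces one line of df output per filesystem, and -k reports sizes in kilobytes, which is easier to parse consistently than the human-readable -h units. Option support still varies, so verify on your platform; the output format below is an assumption.

```shell
#!/bin/sh
# POSIX-style alternative to df -h: -P gives one line per filesystem,
# -k gives sizes in 1K blocks. NR > 1 skips the header line.
df -P -k | awk 'NR > 1 {printf "%s,%sK,%sK,%sK,%s,%s\n", $1,$2,$3,$4,$5,$6}'
```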
05) In our environment, oracle user does not have necessary privileges to remotely login to all unix servers, due to security restrictions/requirements.
06) The script may fail/error out when the network/remote server/listener/database is down or unreachable.
07) All our dbs are running on Oracle version 12c.
We are not using ASM DISKS in our databases.
This script may not work in other versions of Oracle.
Pls make necessary modifications to the script, as may be required in your environment.
08) There are other ways/methods to monitor diskspace on remote servers.
I prefer a shell script to loop through the list of servers, using passwordless ssh command to monitor diskspace.
09) Due to the restrictions/challenges/issues in our environment, I use this approach.
10) Make sure on each monitored server, the unix command to capture the diskspace usage of all mounts, is executed and the specified csv file is populated regularly.
11) Make sure the csv file is truncated during every run and re-populated with current data. I do not append data and I am not keeping historical data.
12) In our environment, on all our servers, we have setup cronjobs to monitor the diskspace usage and populate the required csv file.
13) At times, a mount may become full and csv file creation fails, or the task may fail due to file/directory permissions.
14) Troubleshoot failures of the script as the case may be, and make sure the report displays the current date or (sysdate -1) as the rundate in the report.
Else, the csv file is not current.
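That freshness check can be automated with a small helper like this sketch. It assumes the CSV layout produced by the capture command above (run_date as the second field, in YYYYMMDD format); the `date -d yesterday` syntax is GNU/Linux-specific.

```shell
#!/bin/sh
# check_csv_freshness: hypothetical helper. Prints CURRENT when the run_date
# (second CSV field, YYYYMMDD) is today or yesterday, STALE otherwise.
# "date -d yesterday" is GNU syntax; adjust on other unix flavours.
check_csv_freshness() {
  run_date=$(head -1 "$1" | cut -d',' -f2)
  today=$(date +%Y%m%d)
  yesterday=$(date -d "yesterday" +%Y%m%d)
  if [ "$run_date" = "$today" ] || [ "$run_date" = "$yesterday" ]; then
    echo "CURRENT"
  else
    echo "STALE"
  fi
}

# demo with a throwaway row carrying today's date
printf 'myhost,%s,5.8T,5.4T,395G,94%%,/db\n' "$(date +%Y%m%d)" > /tmp/ds_demo.csv
check_csv_freshness /tmp/ds_demo.csv   # prints: CURRENT
```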
15) Most of the time, I browse through the "/db" and "/archive" mounts in the consolidated local file generated on my laptop to see
how the space utilization is varying across the servers.
16) Make sure the user_name used to connect to all the dbs and read the external table data has the necessary permissions granted.
17) Make sure, all the required db names' tns entries are available in your local tnsnames.ora file.
18) There are other ways, to monitor the diskspace across all the servers, readers are advised to explore other options and choose the best suitable option for your environment.
This method is more convenient/suited to our environment for gathering the required info.
19) On each server, only one Oracle db is running, in our environment.
20) To combine the reports and consolidate, I use this formula:
a) projected_growth_of_db for the next 6/12 months [data obtained from the tablespace capacity planning report, specified in the references section].
b) If available freespace in the db (inside tablespaces) > projected growth for the next 6/12 months (from step a) - no action required.
Else - do we have sufficient freespace on the server to be added to the tablespaces as per growth requirements?
If yes - no action by SAs needed to add additional diskspace on the server.
Else - submit necessary change tickets to SAs to procure additional diskspace and add it to the respective servers' mounts.
c) Initiate the process for adding the additional diskspace required for the growth of the Oracle db, in time, based on the outcome from 20) b).
21) The diskspace capacity projections will be hugely impacted
-when new pluggable databases are created or dropped.
-When heavy data migration to and from a database happens.
-When heavy purging of historical data and re-org of affected tables/indexes.
-any other similar activity in the dbs.
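The decision rule in step 20 above can be sketched as a tiny shell helper. The numbers and GB units are made up purely for illustration; in practice the inputs come from the tablespace capacity report and the diskspace report.

```shell
#!/bin/sh
# capacity_action: sketch of the decision rule in step 20 above.
# Args: projected growth, free space inside tablespaces, free space on the
# mount (all in GB -- illustrative units only).
capacity_action() {
  growth=$1; free_ts=$2; free_mount=$3
  if [ "$free_ts" -ge "$growth" ]; then
    echo "no action required"
  elif [ "$free_mount" -ge $((growth - free_ts)) ]; then
    echo "extend tablespaces from existing mount freespace"
  else
    echo "raise change ticket: procure additional diskspace"
  fi
}

capacity_action 500 600 0     # prints: no action required
capacity_action 500 300 400   # prints: extend tablespaces from existing mount freespace
capacity_action 500 100 200   # prints: raise change ticket: procure additional diskspace
```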

Happy scripting.

http://www.orafaq.com/node/3139 --Tablespace Capacity Planning Report