SQL Server Administration Review

Planning and installing SQL Server

  • Planning
    • Operating System Requirements
      • SQL Server 2012 Enterprise, Business Intelligence, and Web editions can only be installed on Windows Server 2008 and later server OSs
      • SQL Server 2012 Standard can be installed on Windows Server 2008 and later server OSs and on Enterprise/Professional/Business editions of client OSs
      • SQL Server 2012 developer tools can be installed on any supported Windows OS (Vista/Windows Server 2008 and later)
    • An installation of SQL server on a core OS allows only
      • Database engine services, replication, Full-text search, analysis services, client tools connectivity and integration services server
    • Hardware requirements
      • Processor
        • x64: processor must support EM64T
        • x86: Pentium 3 or higher
      • Processor speed
        • x64: min 1.4 GHz, recommended 2.0 GHz
        • x86: min 1.0 GHz, recommended 2.0 GHz
      • Minimum RAM
        • min 1GB, recommended 2 GB
    • Database File management
      • Shrinking and growing databases
        • For fixed-size databases, you increase their size using ALTER DATABASE ... MODIFY FILE with a larger SIZE value
        • Decreasing Database size
          • A database can be configured to AutoShrink
            • This causes the database to move data pages from the end of the file towards the front of the file and return free space to the OS’s file system
            • Causes index fragmentation.
            • Best practice is to manually shrink and immediately rebuild indexes.
            • DBCC SHRINKDATABASE (N'@DBName')
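The shrink-then-rebuild practice above can be sketched as follows (database and table names are placeholders):

```sql
-- Shrink the database, then immediately rebuild the indexes the shrink fragmented
DBCC SHRINKDATABASE (N'SalesDB');
ALTER INDEX ALL ON dbo.Orders REBUILD;  -- repeat for each affected table
```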
      • File Types
        • SQL Server has 3 file types
          • Primary:
            • Data Files with .mdf extension
            • Contain startup information for the database and pointers to other files the DB uses.
            • Can store data and objects
          • Secondary:
            • Optional files used to spread data across multiple disks and increase the size or performance of a database.
            • Multiple files can be used.
            • extension is .ndf
            • Added using
              • Alter Database [@DBName] add file (Name = N'@FileAlias', Filename = N'@Directory\@Filename.ndf', Size = 4096KB, FileGrowth = 1024KB) to filegroup [primary]
          • Transaction Log:
            • Stores information that can be used to recover the database
            • extension is .ldf
      • FileGroups
        • Used to increase performance when you use partitioning.
          • If the files of a DB are on the same filegroup, you can create a table on the filegroup. This allows the table to perform as if it were written to a striped RAID array and improves performance.
        • Reduce backup time for partial backups.
        • Add Filegroup using
          • Alter database [@DBName] add filegroup [@FileGroupName]
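A sketch of placing a table on a dedicated filegroup (all names and paths are hypothetical; run the CREATE TABLE inside the target database):

```sql
-- Add a filegroup, add a file to it, then create a table on that filegroup
ALTER DATABASE SalesDB ADD FILEGROUP FG_Archive;
ALTER DATABASE SalesDB ADD FILE
    (NAME = N'ArchiveData', FILENAME = N'D:\Data\ArchiveData.ndf')
    TO FILEGROUP FG_Archive;
CREATE TABLE dbo.OrderArchive (OrderID int, OrderDate date) ON FG_Archive;
```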
    • SQL Server Capacity constraints
      • A DB cannot exceed 524,272TB
      • A single DB file cannot exceed 16TB
      • Maximum DBs on an instance is 32,767
      • maximum of 16 columns in a primary key
      • maximum of 253 foreign key references per table
      • maximum of 32767 user connections
    • For Windows Server 2008 R2 and Windows 7, when using a managed service account for a SQL Server component, you need to create the account before commencing installation.
    • Benchmarking a server for SQL Server usage
      • Run SQLIO to identify hardware or I/O configuration issues before deploying SQL Server.
        • Used to calculate the saturation point of your I/O subsystem (your I/O subsystem's capacity).
      • Run SQLIOSIM to simulate read, write, checkpoint, backup, sort and read ahead activities.
    • Firewall permissions
      • Grant access to the SQL Server from the Group Policy Management Console.
        • Create a Group Policy object and link it to the designated servers' organizational unit
        • Add all the servers to an organizational unit to ease administration
        • Computer Configuration\Policies\Windows Settings\Security Settings\Windows Firewall with Advanced Security node
        • Create an inbound program rule and allow connection to the SQL Server path (C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe).
        • Limit the scope of the inbound rule to the relevant remote IP addresses from the rule's properties
  • Installing
    • From command prompt
      • Setup.exe /qs /IAcceptSQLServerLicenseTerms /Action=@Action /InstanceName=@InstanceName /Features=@Feature /SQLSYSADMINACCOUNTS="@Domain\@Username" /UpdateEnabled=0
        • @Action can be Install, uninstall, patch, upgrade, rebuilddatabase(to rebuild all system DBs)
        • @Feature can be SQLEngine, IS(integration services), conn(connectivity tools), AS(Analysis services), RS(Reporting services).

Configuring and managing SQL server instances

  • A SQL Server instance is a separate deployment of Analysis Services, the database engine, and Reporting Services hosted on a computer.
    • There can be several instances hosted on one computer
    • Each instance has unique program files, security configuration and directory
    • Up to 50 database engine instances can exist on a host.
  • Configuring instances
    • Instance level settings
      • Memory: Contains settings for minimum and maximum server memory.
        • Limits and ensures the amount of memory is within a specified range.
        • Configured to ensure that the instances play nice with each other.
        • modified using

Exec sys.sp_configure 'min server memory', @MinSizeInMB;

Exec sys.sp_configure 'max server memory', @MaxSizeInMB;
Reconfigure;

          • This requires show advanced options to be on

exec sys.sp_configure 'show advanced options', 1;
reconfigure;


      • Processor and I/O Affinity
        • Processor affinity ties an instance to a specific processor rather than the default of all processors on the host.
          • This eliminates processor reloads and reduces thread migration across processors
          • Modified using

Alter server configuration set process affinity CPU = AUTO  -- or a CPU list, e.g. CPU = 1,5,8

        • I/O affinity binds an instance’s disk I/O to a specific set of CPUs.
        • You cannot configure processor affinity and I/O affinity for the same processor.
      • Fill factor
        • This determines the percentage of space on each leaf-level page that is filled with data when an index is created or rebuilt. The remaining space is used for future growth.
        • Can be modified

sp_configure 'fill factor', @destinationPercentage;


        • Requires advanced options to be on.
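Fill factor can also be set per index at creation or rebuild time; a minimal sketch (index and table names are placeholders):

```sql
-- Leave 20% free space on each leaf-level page for future inserts
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
    WITH (FILLFACTOR = 80);
-- Or apply it when rebuilding an existing index
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80);
```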
    • The model database serves as the template for all new databases that are created on an instance.
      • All the options and contents are copied over to the new database.
    • Distributed Transaction Coordinator(MSDTC)
      • A distributed transaction is a transaction that runs across multiple servers.
        • Not needed if they’re different instances on the same server.
        • Should be configured as a cluster resource before installing the database engine on the cluster.
    • Database Mail
      • User must have msdb role DatabaseMailUserRole
      • Can be enabled using

sp_configure 'Database Mail XPs', 1;
reconfigure;


      • Options can be configured using msdb.dbo.sysmail_configure_sp stored procedure.
      • Database Mail profiles determine which users are able to send messages using Database Mail.
  • Managing Instances
    • Installing additional instances
      • You can have a maximum of 50 instances on a host or 25 on a failover cluster.
      • All instances have unique files.
      • An instance can contain database engine services, analysis services and reporting services.
    • Deploying patches
      • Shared features must be updated if a single instance is updated
      • Patches can be applied/removed to instances individually or to all the instances.
      • can be done using

Setup.exe /qs /IAcceptSQLServerLicenseTerms /Action=(Patch or RemovePatch) [((/InstanceName=@Instance1, @Instance2) OR /AllInstances)]

        • /InstanceName Or /Allinstances should not be used if you intend to patch only shared components.
  • Resource Governor
    • A feature that enables you to limit CPU and memory resources for the database engine in an instance.
    • Resource pools are a subset of the physical resources available to a database engine instance.
      • Hosts one or more workload groups.
        • Workload groups are containers for session requests that share similar classification criteria.
        • There are 2 types
          • Internal workload group
          • Default workload group
        • Created using

Create workload group @WorkloadGroupName
using @ResourcePoolName;
Alter resource governor reconfigure;

      • Each session is assigned to a workload group and runs using the resources allocated to that workload group through its resource pool.
        • There are 2 types of resource pools
          • The internal resource pool represents the resources the instance uses.
            • It cannot be altered and workloads in this pool are critical to the server.
            • It can seize resources from other pools
          • The User defined resource pool.
            • Hosts user sessions
        • A resource pool Consists of 2 components
          • Exclusive: doesn't overlap with other pools. Used to define the minimum resource reservation for each resource pool. The sum of min across all pools cannot exceed 100%.
          • Shared: used to define maximum resource consumption. The effective max value is 100 minus the sum of min across the other pools.
        • Created using

create resource pool @ResourcePoolName
with (@Option);
Alter resource governor reconfigure;

          • @Option can be (min_cpu_percent = @Value) or other pool options such as max_cpu_percent, min_memory_percent, and max_memory_percent
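A sketch with concrete options (pool and group names are placeholders):

```sql
-- Reserve 20% of CPU for a hypothetical reporting workload, capped at 60%
CREATE RESOURCE POOL ReportingPool
    WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 60);
CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
ALTER RESOURCE GOVERNOR RECONFIGURE;
```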
    • Resource governor classification
      • Allocates incoming sessions to a workload group based on session properties.
      • You can classify sessions into workload groups based on classifier functions.
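Classifier functions live in master, must be schema-bound, and return the name of the target workload group; a minimal sketch (the login and group names are hypothetical):

```sql
USE master;
GO
-- Route a hypothetical reporting login to ReportingGroup; everything else to default
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'ReportUser'
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```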
    • Windows System Resource Manager
      • Allows you to allocate system resources to specific processes
      • Used to manage resource usage across several instances.
      • Percentage-based resource management allows you to manage hosts with one processor.
      • Resource allocation policies can be used to manage resource allocation.
    • SQL Server error logs
      • This stores information about processes that have completed successfully or failed.
      • Sysadmins can cycle the error log using

exec sp_cycle_errorlog;

        • For SQL server agent

exec msdb.dbo.sp_cycle_agent_errorlog

Configuring SQL server components
  • Deploying and configuring analysis services
    • Analysis services can be configured in 2 modes
      • Multidimensional and data mining mode:
        • Default
        • supports OLAP databases and data mining models
      • Tabular mode
        • Supports tabular modeling features
        • Allows hosting of solutions built in the tabular model designer
        • allows access of tabular data model data over the network.
    • The modes can be set during installation using the /ASServerMode option
      • can be set to Multidimensional, tabular or powerpivot.
    • The mode is specific to the instance. A different mode requires a different instance.
  • Deploying and configuring reporting services
    • During installation, the install-and-configure option is only available if you've installed the Web Server role and are installing/have installed a database engine on the instance.
    • You can choose to install the reporting services configuration manager.
  • Deploying and configuring SharePoint integration
    • Deploying Analysis Services and Reporting Services as shared services in a SharePoint farm allows you to use PowerPivot for SharePoint and Power View (a Reporting Services report designer)
      • To deploy RS, Power View, and PowerPivot, the following must be installed
        • SharePoint Server 2010 SP1
        • SQL Server 2012 SQLEngine
        • SQL Server 2012 RS and the RS add-in
        • SQL Server 2012 PowerPivot for SharePoint
  • Configuring SQL server integration services security.
    • Integration Services is a shared component that allows you to run and schedule Integration Services packages in SSMS.
    • Only members of the built-in administrators group can run Integration Services.
      • Use the DCOM configuration tool to grant users access to SSIS
        • Run dcomcnfg.exe and configure access for the SSIS application.
  • Managing Full-text indexing
    • Full text indexes store information about important words and their locations in the columns of a database table.
      • Only one full-text index can be created per table/indexed view.
      • Created using

create fulltext catalog @FulltextCatalogName;

Create fulltext index on @TableName
( @columnToMonitor1, @columnToMonitor2, @columnToMonitor3)
Key index @UniqueKeyIndexName
on @FulltextCatalogName;

      • Removed using

drop fulltext index on @TableName;
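Once built, a full-text index is queried with predicates such as CONTAINS; a hypothetical example (table and column names are placeholders):

```sql
-- Find rows whose Description column contains the word 'mountain'
SELECT ProductID, ProductName
FROM dbo.Products
WHERE CONTAINS(Description, N'mountain');
```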

  • Configuring Filestream
    • Filestream allows SQL Server-based applications to store unstructured data (e.g. images, documents) on the host's file system.
    • Filestream requires that
      • objects to be stored are greater than 1MB
        • for objects smaller than 1MB, use the varbinary(max) Blob data type
      • The system architecture(x86 or x64) of SQL server is the same as that of the system.
    • To Enable
      • edit the instance in SQL server configuration manager to enable filestream access
      • Set filestream access level to 2

Exec sp_configure 'filestream access level', 2;
Reconfigure;


        • setting it to 0 means no access, 1 allows T-SQL only, and 2 allows T-SQL and Win32 streaming
      • Create a filestream filegroup for the database.

Alter database @DBname ADD filegroup @FileStreamFilegroupName contains filestream

      • Add filestream files to the filestream filegroup by specifying a folder that doesn’t currently exist

Use master

Alter database @DBName add file(
Name = @FileName, Filename = '@NonExistentDirectory')
To filegroup @FileStreamFileGroupName
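A table then stores filestream data in a varbinary(max) column marked FILESTREAM; a minimal sketch (all names are placeholders):

```sql
-- A FILESTREAM table requires a ROWGUIDCOL uniqueidentifier column
CREATE TABLE dbo.Documents (
    DocID uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName nvarchar(260),
    DocContent varbinary(max) FILESTREAM NULL
);
```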

  • Configuring filetables
    • FileTables are tables that allow you to store files and documents within SQL Server 2012.
      • Provides a hierarchy of files and directories.
    • To enable
      • Set filestream access level to 2
      • Enable Non-transactional access at the database level

Alter database @DBName set filestream (Non_transacted_access = Full, Directory_name = N'@DirectoryName')

        • Directory_name is the alias for the filestream directory.
    • To create a filetable

Create table @TableName as FileTable;

  • Configuring and managing databases
    • Designing and managing filegroups
      • Each database has a primary filegroup.
        • This hosts the primary data file and any secondary files not allocated to other filegroups.
        • The system tables are hosted in the primary filegroup
      • Filegroups are used to group data files together.
      • When a filegroup contains more than one file, the database engine writes data across the files proportionally, depending on how much free space is available in each file
      • You can specify the filegroup a table or index is created in
      • To add a filegroup

alter database @DBName add filegroup @NewFileGroupName

      • You can move indexes from one filegroup to another.
        • You cannot move indexes created using a unique/primary key.
    • Contained Databases
      • These are databases that include all the settings and metadata required to define the database and have no dependency on the instance.
        • A database can be partially or fully contained.
          • Partially contained databases provide a high degree of isolation from the instance but aren't fully contained.
      • They allow you to move a database between instances easily.
      • To enable contained databases on an instance,

Exec sp_configure 'contained database authentication', 1;
Reconfigure;


      • To convert a database to a contained database

Alter database @DBNAME set containment = @Containmenttype

        • @ContainmentType is NONE or PARTIAL (SQL Server 2012 implements only partial containment)
      • To create a contained database

create database @DBName

containment = @ContainmentType

    • Using Data Compression
      • Compression allows you to save storage space by reducing the size of the database
        • It can be configured on the following
          • Clustered tables
          • Heap tables
          • Non-clustered indexes
          • Indexed views
          • Individual partitions of a partitioned table or index
      • It increases the CPU usage because the data must be compressed and decompressed when being accessed.
      • Only enterprise and developer editions support data compression.
      • There are 3 types
        • Row-level:
          • Works by using more efficient storage formats for fixed-length data.
            • Stores fixed length data types as though they are VAR data types
            • not storing NULL or 0 values
            • reducing metadata required to store data
          • Requires less CPU than page-level compression
          • Maximizes the number of rows stored in a page
        • Unicode
          • Enables the engine to compress unicode values stored in page or row compressed objects.
          • used for nchar and nvarchar data types
        • Page-level
          • Compresses data by storing repeating values and common prefixes only once and then making references to those values from other locations within the table.
            • Row compression maximizes the number of rows in the page
            • Column prefix compression is applied by replacing repeating data patterns with references. the data is stored on the page header.
            • Dictionary compression scans for repeating values and then stores this information in the page header.
          • When it is applied, row compression is also applied
      • Done using
        • Tables

Alter table @TableName rebuild with (Data_compression=@compressionType)

        • Index

Alter index @indexname on @tablename rebuild partition = all with (data_compression = @CompressionType)

          • @Compressiontype is either page or row.
      • Estimating compression savings can be done with sp_estimate_data_compression_savings
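A hypothetical call (schema and table names are placeholders):

```sql
-- Estimate how much space PAGE compression would save for a table
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Orders',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```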
    • Encrypting databases with Transparent data encryption
      • Transparent Data Encryption enables you to encrypt an entire database at rest; backups of the database are also encrypted.
        • uses a Database Encryption Key(DEK) which is stored in the database boot record.
          • The DEK is protected by the database master key which is protected by the service master key
      • To encrypt a database
        • Create the master encryption key

Use master;

Create master key encryption by password = '@Password';

        • Create the certificate protected by the master key

Create certificate @CertificateName with Subject = '@CertificateAlias';

          • Back up the certificate in the master database. If you lose the DB server and don't have the certificate, you cannot access data in the database
            • You need to create a backup of the certificate and private key, both of which are required for certificate recovery
Backup certificate @CertificateName
To file = '@DestinationFile'
with private key
( file = '@PrivateKeyName',
Encryption by password = '@PrivateKeyPassword'
);
Go
            • The files are written to \mssql\data for the instance.
        • Create a DEK and protect it with the certificate

Use @DBToBeEncrypted

Create database encryption key with algorithm = AES_128
Encryption by server certificate @CertificateName

        • Encrypt the database

alter database @DBName

set encryption on;
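You can confirm the state afterward (a hypothetical check against the catalog views):

```sql
-- is_encrypted switches to 1 once encryption completes
SELECT name, is_encrypted
FROM sys.databases;
```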

    • Partitioning indexes and tables
      • Partitioning divides index and table data across more than one filegroup. Data is partitioned so that groups of rows are mapped to individual partitions.
      • All partitions of a table or index must reside in the same database.
      • Available in enterprise and developer editions.
      • x64 versions support up to 15,000 partitions, while x86 versions are recommended to support up to 1,000 partitions.
      • Concepts
        • Partition functions define how the rows of an index or table map to specific partitions based on the values of the partitioning columns
        • Partition scheme maps the partitions of a partition function to a collection of filegroups.
        • Partitioning column is the column of an index or table that a partition function uses to partition the index or table
        • Aligned index is an index that uses the same partition scheme as the table to which it belongs
        • Non aligned index is an index that is partitioned independently from the table to which it belongs
        • Partition elimination is the process by which the query optimizer accesses only the appropriate partitions to satisfy a query’s filter criteria.
      • To create
        • Create the filegroup(s) for the partitions
        • Create the partition function:

Create partition function @PartitionFunctionName(int) as range left for values (@ValueToUseToSplit1, @ValueToUseToSplit2)

          • This creates a partition function that splits data into 3 partitions (n boundary values yield n+1 partitions).
          • @ValueToUseToSplit1/2 are the boundary values used to partition
        • Create the partition scheme

create partition scheme @PartitionSchemeName
As partition @PartitionFunctionName
To (@Filegroup1, @Filegroup2, @Filegroup3);


        • Create/modify the table/index and specify the partition scheme

create table @TableName (@TableSchema) on @PartitionScheme(@PartitioningColumnDefinedInTableSchema)

      • You can alter and merge partitions functions using Alter partition function
        • This allows you split one partition into two or merge 2 partitions into 1.
      • You use alter partition scheme to modify an existing partition scheme.
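A sketch of splitting and merging (function, scheme, filegroup names and boundary values are hypothetical):

```sql
-- Tell the scheme which filegroup the new partition should use, then split
ALTER PARTITION SCHEME PS_Sales NEXT USED [FG4];
ALTER PARTITION FUNCTION PF_Sales() SPLIT RANGE (2000);  -- one partition becomes two
ALTER PARTITION FUNCTION PF_Sales() MERGE RANGE (1000);  -- two partitions become one
```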
    • Managing log files
      • Log truncation is the process by which SQL Server frees space in the logical log for reuse by the transaction log.
        • It occurs if
          • In simple recovery, When a checkpoint occurs.
            • Automatic checkpoints occur when the number of log records reaches the number SQL Server estimates it can process within the recovery interval server configuration option.
            • Automatic checkpoints occur when the virtual log becomes 70% full.
            • Manual checkpoints are triggered using the checkpoint statement.
          • In full/Bulk-logged recovery,
            • After a log backup, as long as a checkpoint has occurred after the previous backup.
      • Use DBCC SQLPERF (LOGSPACE) to monitor the amount of log space used.
    • Using Database Console commands(DBCC)
      • They allow you to perform server administration using queries.
      • They are divided into
        • Maintenance statements
          • DBCC Cleantable reclaims space from dropped var-length columns in indexed tables or views
          • DBCC DBreindex: rebuilds one or more indexes for a table
          • DBCC Dropcleanbuffers: removes all clean buffers from the buffer pool
          • DBCC Freeproccache: removes all elements from the plan cache, a specific plan from the plan cache or all cache entries related to a specific resource pool
          • DBCC Indexdefrag: defragments indexes
          • DBCC Shrinkdatabase: shrinks the size of all data and log files of the specified database.
          • DBCC ShrinkFile: shrinks a specified data or log file
          • DBCC UpdateUsage: updates page and row count data for catalog views to remove inaccuracies.
        • Informational Statements
          • DBCC inputbuffer: shows the last statement forwarded from a client to SQLEngine
          • DBCC Opentran: shows information about the longest running transaction
          • DBCC outputbuffer: displays the current output buffer in hex and ascii for a specific session id
          • DBCC procache:provides information about the procedure cache
          • DBCC show_statistics: provides current query optimization statistics for a table or indexed view.
          • DBCC showconfig: provides fragmentation information for tables, views or indexes
          • DBCC sqlperf: displays transaction log space usage statistics for all databases hosted by an instance.
          • DBCC tracestatus: provides information about trace flags
          • DBCC useroptions: provides information about currently set options on the connection.
        • Validation statements
          • DBCC checkalloc: performs a consistency check of disk space allocation structure
          • DBCC checkcatalog: checks catalog consistency of online databases
          • DBCC Checkconstraints: verifies the integrity of a specific constraint or all constraints on a table within the current databases
          • DBCC checkDB: Checks the physical and logical integrity of all objects in a database.
          • DBCC checkFileGroup: verifies the allocation and structural integrity of indexed views and tables in a specified filegroup
          • DBCC checkident: verifies and if necessary, changes the identity value for a specific table.
          • DBCC checktable: verifies the integrity of all pages and structures that make up a table or indexed view.
        • Miscellaneous Statements
          • DBCC dllname(Free): unloads a specific extended SP DLL from memory
          • DBCC freesessioncache: flushes the distributed query connection cache
          • DBCC Freesystemcache: flushes all unused cache entries from all caches
          • DBCC Help: provides information on a specific DBCC command
          • DBCC traceoff: disables specific trace flags
          • DBCC traceon: enables specific trace flags
Migrating, importing and Exporting
  • Migrating and upgrading to SQL server 2012
    • Upgrading an instance
      • From SQL Server 2005: ensure the host operating system can run SQL Server 2012
        • Apply 2005 SP4
      • From SQL server 2008: Apply SP2
      • From 2008 R2: Apply SP1
    • Upgrade advisor
      • A tool that allows you identify issues that will potentially block an upgrade
      • It checks the services, but not applications or encrypted SPs.
      • Can be run only on hosts that support SQL server 2012.
    • Distributed replay utility
      • Simulates how a test server will function before upgrading to SQL server 2012 by simulating mission-critical workload.
      • Allows you determine if any incompatibilities might occur between the application and SQL server 2012.
    • Performing an upgrade
      • For x64 instances, you need to upgrade AS before the database engine.
    • Migrating a Database to a SQL server 2012 instance
      • Allows you to move to a SQL Server instance of a different architecture
      • Detach a database
        • This involves removing the database from the instance while keeping the DB and log files.
        • Requires that
          • The DB is not replicated or published
          • The DB is not mirrored
          • The DB has no snapshots
          • It is not a system DB
        • Done using

exec sp_detach_db @dbname= @DBName;

      • Attach a database
        • You can attach a database only if all data files are available.
          • Done Using

create database @DatabaseName on (filename = '@Directory\@DataFileName'), (filename = '@Directory2\@DataFileName2'), (filename = '@Directory3\@LogFileName') for attach;

        • attaching a database sets the compatibility level to 90 if it was less, otherwise it leaves it as is.
      • Copying a database to other servers.
        • This allows you retain a copy of the database while you deploy a copy to the target.
        • This ensures the original database is still available
        • 3 ways to copy a database
          • Copy database wizard:
            • The database becomes immediately available on the target instance after the transfer completes and is automatically upgraded.
            • Full text indexes are unavailable during the upgrade.
            • Allows you move SQL logins, jobs, SPs and custom error messages
            • Allows you schedule the copy.
            • Cannot copy
              • system DBs
              • replicated, offline, suspect, loading, recovering, inaccessible and emergency mode DBs
            • Requires SQL server agent to be started on destination.
            • There are 2 transfer methods
              • Detach and attach method: fastest, but requires the DB at the source to be taken offline.
                • The DB is reattached after the copy
              • SQL Management Object method
                • Slower, but allows the DB at the source to stay online
            • Create a credential and an Integration Services proxy account to use for the copy database wizard.
          • Backup and restore
            • Prior to this, create the same backup devices on the destination as you use on the source.
            • When restoring a DB backup from a previous version, use the move option
            • The principal who performs the restore becomes the owner of the restored database.
            • Backups don’t have all metadata for the database
              • You must migrate Logins and jobs
          • Publish database using generate and publish scripts wizard
            • Used to publish a DB to a web host provider.
            • Can also publish a script you can use to transfer the DB.
      • Migrating SQL Logins
        • You can use 2 methods besides the copy database wizard
          • Use the generate a script function on the login
            • This allows you to recreate an object in the DB.
            • Allows you transfer windows authenticated SQL Logins between instances
            • for SQL-authenticated logins, the accounts are disabled and will need to have their passwords reset.
          • Use the microsoft script.
  • Exporting and importing data
    • Large-scale data import and export operations are called bulk imports and exports
    • SQL server has several tools
      • The BCP command line utility
        • Allows you to bulk export data from a DB to a file or import from a file into tables in a DB.
        • Cannot transfer from DB to DB.
        • The result is unordered, as SQL Server uses parallel scans, unless queryout is used with a query containing an ORDER BY clause.
        • Before importing data, ensure the destination table has columns configured that are compatible with the data type you want to import.
        • Export done using

bcp "@QueryWithOrderByClause" queryout @DestinationFile -T

          • -T signifies you want to use a trusted connection (Windows authentication).
        • Import done using

bcp @Database.@Schema.@TableName in @FileContainingData -T

        • You can use a format file with bcp by specifying the -f option
      • Bulk insert statement
        • Used to load data from a file.
        • Allow you use a format file
        • Done using

bulk insert @Database.@Schema.@TableName from '@Directory\@File'

      • OPENROWSET(Bulk…)
        • Allows you to connect to an OLE DB data source to retrieve data.
        • Used in an insert into … select … from OPENROWSET(Bulk…) clause.
          • … are the parameters
        • Most suitable for one-off imports
        • Allows you to use a format file
      • Integration services import and export wizard
        • Migrates data between SQL Server and OLE DB providers such as Microsoft Access and Excel
        • Creates an Integration Services package and allows you to perform complex data transformations.
        • Allows you to schedule it to run automatically using SQL Server Data Tools.
      • Select into.
        • The into clause allows you to create a new table by using the results of a select statement.
        • The columns of the new table inherit the characteristics of the output columns.
        • Indexes, triggers, and constraints aren't transferred and must be manually created at the target.
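A minimal sketch of SELECT INTO (table and column names are placeholders):

```sql
-- Create and populate a new table from a query in one statement
SELECT OrderID, OrderDate, CustomerID
INTO dbo.OrdersCopy
FROM dbo.Orders
WHERE OrderDate >= '20120101';
```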
    • Format files are used to map data file fields to the columns of a table.
      • Can be used with bcp, BULK INSERT, or OPENROWSET(BULK…) when importing data from a file that uses fixed-width or character-terminated fields.
      • Format files can be XML or non-XML.
      • Must be used if you are performing bulk import from a data file with fixed width or fixed-length fields.
      • bcp is used to create a format file

bcp @TableName format nul -c -x -f @Filename.xml -t, -T

    • Preparing data for bulk operations
      • To improve performance
        • Switch the database recovery model from full to bulk-logged or simple for the duration of the import.
        • Perform the bulk import operations in parallel
          • If the destination table has no indexes, specify the TABLOCK option.
            • This causes the DB to perform bulk optimization and prevents blocking issues.
        • Disabling triggers and constraints
        • If the destination table has a clustered index, import data from the data file using the same ordering as in the table
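        • Combining these options, a hedged BULK INSERT sketch (table, directory and column names are placeholders) might be:

bulk insert @Database.@Schema.@TableName
from '@Directory\@File'
with (TABLOCK, ORDER (@ClusteredKeyColumn));
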
SQL Server Logins, Roles and Users
  • Logins are credentials that authenticate connections to an instance.
  • SQL Logins
    • A database user must map to an existing SQL Server login unless the instance supports contained databases.
    • There are 4 types
      • Windows-authenticated login
        • Instance logins for which the OS handles authentication.
        • The SQL login is mapped to an OS user account
        • Done Using

Create Login "@DomainOrHostName\@UserName" from windows;

      • SQL server-authenticated login
        • Logins are authenticated by the database engine.
        • Done using

Create Login @UserName with password = '@Password';

        • Can be configured to use the password expiration and complexity policies of the host OS using 2 create login options
          • Check_Expiration: use the password expiration of the host OS. can be set to on or off
          • Check_policy: use the password complexity policy of the host OS. can be set to on or off
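          • For example (login name and password are placeholders):

Create Login @UserName with password = '@Password',
Check_Expiration = on, Check_Policy = on;
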
      • Certificate
        • You can create a certificate to be associated with a login if a database master key is present.
          • To create a master key

create master key encryption by password = '@password'

        • To do this
          • Create a certificate and configure its expiry

Create certificate @CertificateName

with Subject='@DetailsOfCertificate_E.G._TheUserOfTheCertificate'

          • Create the login from the certificate

Create login @UserName from Certificate @CertificateName

      • Asymmetric Key
        • Asymmetric keys contain both a public key and a private key.
        • If there is no database master key, you must provide a password when creating an asymmetric key.
        • To do this
          • Create the asymmetric key

create asymmetric key @KeyName with algorithm=RSA_2048;

          • Create the Login from the asymmetric key

create login @Username from asymmetric Key @KeyName

    • Altering existing logins
      • Done using

Alter login @UserName @Option

        • @Option can be disable, unlock and Must_change
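        • For example, to disable a login, or to force a password change at next logon (placeholder names; must_change requires check_expiration to be on):

Alter login @UserName disable;
Alter login @UserName with password = '@NewPassword' must_change, check_expiration = on;
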
    • Viewing logins
      • Query sys.server_principals
      • For sql server authenticated logins, query sys.sql_logins
    • Removing logins
      • Done using

Drop login @UserName

      • The login mustn't be connected, own a server-level object, a SQL Server Agent job or a securable.
      • Best practice is to disable the login instead of dropping it.
    • Denying server access
      • Used to prevent specific Windows security principals from connecting to the database engine
      • done using

Deny Connect SQL "@Domain\@User"

  • Server Roles
    • They allow you to simplify the assignment of permissions at the database instance level.
    • Permissions are assigned to server roles rather than individual logins
    • To grant a login permissions, you add the login to the role
    • There are 9 fixed server-level roles
      • Sysadmin: Members can perform all activities possible on the database engine instance
      • Serveradmin: Members can perform instance-wide configuration tasks.
      • Securityadmin: Members can carry out instance-level permissions management.
      • Processadmin: Members can terminate processes running on an instance
      • Setupadmin: Members can add and remove linked servers
      • Bulkadmin: members can use bulk insert on databases on the instance.
      • diskadmin: members can manage instance-related files.
      • dbcreator: Members can create, alter, drop and restore databases on the instance
      • public: All logins are members of this role. You can add permissions to this role.
    • You use Alter server role to modify the membership
      • use the add member option

Alter server role @RoleName add member "@Domain\@UserName"

    • To manage server roles
      • sp_helpsrvrolemember
        • Provides fixed server role membership
      • sp_srvrolepermission
        • provides fixed server role permissions
      • is_srvrolemember
        • Enables you to check whether a login is a member of a specific fixed or user-defined server role
      • sys.server_role_members
        • Provides information about role members
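      • For example, is_srvrolemember can check the current login's membership in a role (returns 1 for a member, 0 for a non-member, NULL for an invalid role):

select is_srvrolemember('sysadmin');
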
    • User defined server roles
      • Used to create custom server roles.
      • To create User defined server roles,
        • Create the server role

Create server role @ServerRoleName [Authorization = "@Domain\@Username"]

          • You can use the authorization to set who owns the server role.
        • Assign permissions to the server role

grant @Permission to @ServerRoleName

          • @Permission is the permission to grant the role e.g. alter any database
        • Add users to the server role
  • Credentials
    • They store authentication information that facilitates a connection to a resource external to the DB engine instance e.g. an FTP server.
    • You can map a single credential to multiple SQL logins.
    • You can map a login to a single credential only.
    • Created using

create credential @CredentialName with identity = '@UserName', secret = '@Password'
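
    • A login can then be mapped to the credential (a sketch; names are placeholders):

Alter login @UserName with credential = @CredentialName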

  • Managing Users and DB roles
    • Database users
      • Database users represent the identity of SQL server logins when connected to a DB.
      • Each DB user maps to a login, except in contained databases.
      • You can map a login to only one user per DB, but to different users in different DBs
      • Add a database user using

Use @DBName

create user "[@Domain\]@UserName" for Login "[@Domain\]@UserName"

      • Drop a database user using

drop user "@Domain\@UserName"

        • Dropping a user doesn’t remove the SQL login
        • Deleting an SQL Login doesn’t delete the database users associated with the SQL login
        • Orphaned users are database users whose corresponding SQL logins have been dropped, or whose database has been moved to a different instance
          • To list orphaned users, use sp_change_users_login @Action='report'
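          • An orphaned user can often be re-linked to a login with the same procedure (a sketch; Auto_Fix assumes the login and user share a name):

Exec sp_change_users_login 'Auto_Fix', '@UserName';
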
    • Database roles
      • They allow you to assign permissions to DB users on a collective basis. There are 9 fixed DB roles
      • Fixed DB roles: Have specific permissions that cannot be altered
        • db_owner: Assigned to principals who need to perform all DB config and management tasks. Members can also drop the DB
        • db_securityadmin: Members manage membership of roles.
        • db_accessadmin: Members manage database access for logins
        • db_backupoperator: Members can back up the DB
        • db_ddladmin: Members can run any DDL command in the DB
        • db_datawriter: Members can insert, delete or modify data in a DB user table
        • db_datareader: members can read all data from all user tables in a DB
        • db_denydatawriter: Blocks members from inserting, deleting or modifying data in a DB user table
        • db_denydatareader: Blocks members from reading data stored in a DB’s user table
      • Flexible DB roles
        • These have custom permissions
        • Created using

create role @RoleName [Authorization=[@Domain\]@Username]

          • Authorization is used to set ownership of the role
        • Assign permissions to a role using

grant @Permission to @role

      • To add a DB user to a role,

Use @DBName

Exec sp_addrolemember '@Role', '[@Domain\]@Username'

        • You cannot use alter role @DBRoleName add member the way alter server role @InstanceRoleName add member is used for server-level roles
      • msdb roles
        • msdb has a set of special DB roles in addition to the 9
          • They allow you to assign permissions for Integration Services, the data collector, server groups, DB mirroring, SQL Server Agent and policy-based management
      • User and role catalog views
        • Used to determine information about users, roles and permissions at the DB level
          • sys.database_principals: provides information about DB principals
          • sys.database_role_members: provides information about roles and the security principals who are members of those roles
          • sys.database_permissions: provides information about permissions assigned to security principals at the DB level.
    • Contained Users
      • Contained DBs allow you to move a DB to a different instance or the cloud and deploy it without additional config
      • A contained user is a DB user who does not have a SQL Server login.
        • They connect by specifying credentials in the connection string
      • To create contained users
        • Enable contained DB authentication on the instance

sp_configure 'show advanced options',1;

reconfigure with override;
sp_configure 'contained database authentication',1;
reconfigure with override;

        • Create the contained users

create user @Contained_user with password = '@Password'

          • Partially contained DBs can have windows authenticated users

create user [@Domain\]@Username

    • Least privilege
      • The principle of least privilege is a principle of information security that dictates granting a security principal only privileges necessary to perform required tasks.
    • Application roles
      • These enable you to grant permissions to a specific application.
      • Applications access data using permissions assigned to the application role instead of a DB user's permissions.
        • Happens in this way
          • User interacts with a client application that connects to an instance with the user’s credentials
          • The application executes sp_setapprole, authenticating by using a password configured for the application
          • When authenticated, the application role is enabled, and the connection uses the permissions assigned to the application role.
        • Can be created using

Create application role @ApplicationRoleName with password='@Password'

        • Can alter the name, password or default schema using alter application role
        • Can be dropped using drop application role
Securing SQL server
  • Database permissions
    • These enable you to control which actions can and cannot be performed by security principals at the database level.
    • It involves understanding permissions and how they can be managed
    • Securables
      • An item for which you can assign permissions, e.g. users, schemas, roles, objects
      • Permission keywords
        • Alter: provides the ability to modify the properties of a securable.
        • Backup/dump: permission to back up or dump the securable
        • control: all permissions on the securable; can grant permissions on the securable; grants control permissions on all securables within the scope
        • create: Permission to create the securable
        • delete: Permission to delete the securable
        • execute: permission to execute the securable
        • impersonate: permission to impersonate the securable principal.
        • insert: permission to insert data into the securable
        • receive: permission to receive messages from service broker
        • reference: permission to reference the securable
        • restore/load: permission to restore the securable
        • select: Permission to view data in the securable
        • take ownership: permission to take ownership of the securable
        • update: Permission to modify the data stored in the securable
        • view definition: permission to view the securable definition
        • connect: permission to connect to the securable
      • Assigning permissions on objects
        • Use the following to manage permissions on objects
          • Grant: to allow the permission to be used

grant @PermissionKeyword on @DBObject to @DBUser

          • Deny: to block the permission from being used

Deny @PermissionKeyword on @DBObject to @DBUser

          • Revoke: to remove an existing grant or deny permission

revoke @permissionKeyword on @DBObject to @DBUser

    • Managing permissions by using database roles
      • Assigning fixed DB roles
        • The permissions on the fixed DB roles cannot be changed
        • Among the fixed DB roles, only the public role can be added as a member of another database role.
          • The other fixed DB roles do not support this feature

sp_addrolemember @RoleWithDesiredPermission, @RoleToBeEnhanced

alter role @RoleWithDesiredPermission add member @RoleToBeEnhanced

    • Using Flexible DB roles
      • These allow nesting of roles.
    • Protecting objects from modification
      • This involves configuring permissions so only authorized principals can make modifications.
      • To carry it out at the DB level, add security principals to the db_denydatawriter role.
        • This applies to the whole DB
      • To protect specific database objects, use deny with alter, control, delete, insert, take ownership and update permissions

Deny alter on @DBObject to @SecurityPrincipal

Deny control on @DBObject to @SecurityPrincipal

      • Placing objects in read-only filegroups doesn’t stop all forms of modification
    • Using Schemas
      • Schemas simplify the application of permissions by enabling you to collect securables into containers.
      • Objects belong to schemas, and schemas belong to security principals.
      • You can create a flexible role, create a schema owned by the role, then add principals to the role to grant them permissions over objects within the schema.
        • You can move securables between schemas in the same database

Alter schema @DestinationSchema transfer @SourceSchema.@Object;
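
      • The role-plus-schema workflow above can be sketched as (role, schema and user names are hypothetical; create schema must run in its own batch):

create role @SchemaOwnerRole;
create schema @SchemaName authorization @SchemaOwnerRole;
exec sp_addrolemember '@SchemaOwnerRole', '@UserName';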

    • Determining effective permissions
      • Use HAS_PERMS_BY_NAME() to determine a specific principal’s effective permission on a certain securable.
        • An effective permission can be one of the following
          • Permission granted directly to the security principal
          • permission is granted to a role of which the security principal is a member
          • Permission is implied by a higher-level permission held by the security principal.
          • Permission is not denied to the principal directly or indirectly through role membership.
        • HAS_PERMS_BY_NAME() returns 1 if the principal has an effective permission and 0 if it doesn’t. It returns Null if neither the permission nor securable class is valid.
        • It runs in the context of the current principal.
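        • For example, to check the current principal's SELECT permission on a table (the object name is a placeholder):

select HAS_PERMS_BY_NAME('dbo.@TableName', 'OBJECT', 'SELECT');
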
  • Troubleshooting SQL server security
    • This involves ensuring that the security settings configured for an instance or database match the security requirements for that instance or database.
    • Troubleshooting authentication
      • query sys.server_principals to determine if a login is disabled
      • use alter login to configure login permissions
        • use must_change parameter to force a password change upon next logon
        • use unlock parameter to unlock a user’s account
      • Authentication modes
        • Windows authentication mode disables SQL Server logins. When selected during install, the sa account is disabled
        • Mixed authentication mode allows both Windows and SQL Server logins. This requires a strong password to be assigned to the sa account
    • Client connection problems
      • The browser service helps clients connect to instances that are not using fixed ports.
      • If users cannot connect to named instances, a stopped Browser service is the most likely cause
    • Troubleshooting certificates and keys
      • A common problem is expired certificates.
      • Ensure keys and certificates are regularly backed up because encrypted data is unrecoverable if keys and certificates become corrupt.
      • To see information about them, query
        • sys.asymmetric_keys
        • sys.certificates
        • sys.key_encryptions: provides info about each symmetric key encryption specified when using create symmetric key with the encryption by parameter
        • sys.symmetric_keys
    • Troubleshooting endpoints.
      • You need to understand the precise configuration of the endpoint that is causing the issue. To do this, query
        • sys.database_mirroring_endpoints
        • sys.endpoints
        • sys.http_endpoints
        • sys.service_broker_endpoints
        • sys.tcp_endpoints
      • When troubleshooting security endpoints, you might need to alter the authentication or encryption settings.
      • An endpoint can be in a stopped state.
      • You can configure an existing endpoint using alter endpoint
        • To start a stopped endpoint, use state parameter
        • To change the endpoint authentication, use authentication parameter
        • To enable, disable or require encryption, use the encryption parameter
      • use the grant statement to grant permissions on an endpoint to a login.
      • When configuring permission on endpoints, ensure the remote instance can authenticate the local instance using windows authenticated logins or certificates.
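      • As a sketch, restarting a stopped endpoint and granting connect on it (endpoint and login names are placeholders):

alter endpoint @EndpointName state = started;
grant connect on endpoint::@EndpointName to "@Domain\@Username";
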
    • Security catalog views
      • They are used to view the properties of server and database security. They include
        • sys.server_permissions
        • sys.server_principals
        • sys.server_role_members
        • sys.sql_logins
        • sys.system_components_surface_area_configuration
      • To view security config at the individual DB level, use the following
        • sys.database_permissions
        • sys.database_principals
        • sys.database_role_members
        • sys.master_key_passwords
  • Auditing instances
    • Auditing enables you to track activity on a SQL Server instance.
    • SQL Server audit
      • It enables you to configure sophisticated auditing at the instance and database levels
      • It forwards audit results to a target (audit destination). The target can be
        • A flat file
        • Application log
        • security log
      • To create a server audit

Create server audit @AuditName

To @Target
( Queue_delay=@DelayDuration, On_Failure=@AuditFailureAction)

        • @Target can be the directory of the file, application_log or security_log
        • @DelayDuration is the delay in ms before audit actions must be processed.
        • @AuditFailureAction determines the action to take when there is an audit log failure. If set to shutdown, the instance shuts down and might require the -f service parameter to start up again.
      • SQL server audit action groups and actions
        • These are used to determine the kinds of actions Server audit records.
        • you can audit the use of specific actions on DB schema and schema objects with DB audit specifications
      • To create a server audit specification

create server audit specification @AuditSpecificationName

For server audit @AuditName
Add (@ServerAuditActionGroup)

      • To alter a server audit specification

alter server audit specification @AuditSpecificationName


        • To turn on the audit specification

alter server audit specification @AuditSpecificationName with (State = on)

      • To create a DB audit specification

create database audit specification @AuditSpecificationName

For server audit @AuditName
ADD ( @Action on Object::@Schema.@Object by @SecurityPrincipal)

        • To turn on the audit,

alter database audit specification @AuditSpecificationName with (State = on)

        • To create, alter or drop a DB audit specification, a security principal must have permission to connect to the DB and must be assigned either the alter any database audit permission or the alter or control permission on the DB
      • Viewing audit views and functions
        • sys.dm_audit_actions
        • sys.server_audits
        • sys.dm_server_audit_status
        • sys.server_audit_specifications
        • sys.server_audit_specifications_details
        • sys.database_audit_specifications
        • sys.database_audit_specifications_details
        • fn_get_audit_file()
    • C2 Audit Mode
      • This configures the instance to record both successful and failed attempts to access statements and objects as per the C2 security standard
      • Enabled using

sp_configure 'c2 audit mode', 1;


      • C2 audit data is stored in the instance data directory.
    • Common criteria compliance
      • Common criteria compliance supersedes c2 audit mode. This information is viewed by querying sys.dm_exec_sessions.
      • To enable

sp_configure 'common criteria compliance enabled', 1;


    • Policy based management
      • This provides you with a method of managing more than one instance across multiple servers.
      • Can be used to manage, instances, databases, and other objects like backup devices
      • Objects to configure policy-based management
        • Facet: A collection of properties that can be configured
        • Condition: An expression specifying the allowed values of a facet's properties
        • Policy: A condition that should be enforced.
        • Category: A group of policies that should be enforced together.
        • Target: Defines the scope of policy application. can include instances, databases or DB objects
      • Policies can be exported and imported between instances.
      • You can create a central management server for applying policies on one instance to all instances.
      • To view information about policy based management
        • syspolicy_conditions
        • syspolicy_policies
        • syspolicy_policy_execution_history
        • syspolicy_policy_execution_history_details
        • syspolicy_policy_group_subscriptions
        • syspolicy_policy_categories
        • syspolicy_system_health_plan
Mirroring and replication
  • Mirroring is the process of creating and maintaining an always up-to-date copy of a database on another SQL-Server instance. If a witness instance is configured, automatic fail over is possible if the principal instance becomes unavailable
  • DB mirroring
    • A strategy for increasing the availability of a database.
    • DB mirrors are paired copies of a single DB hosted on separate instances of SQL server.
    • All operations performed against the principal DB are redone on the mirror through a stream of active transaction log records that the engine applies in sequence immediately after it receives them.
    • Mirroring can be configured in 2 modes
      • High-safety mode: Enables failover to occur without data loss from committed transactions. Transactions are committed on both partners once they are synchronized.
      • High-performance mode: data loss is possible. The primary instance doesn’t wait for the mirror instance to confirm it has received the log record
    • Mirroring can only occur for user databases
    • Only databases with full-recovery model can be mirrored
    • Mirroring supports only a single principal instance and a single mirror instance.
  • Mirroring pre-requisites
    • All instances must run the same version and edition, except the witness which can run any edition.
    • Mirroring restrictions
      • Mirrored databases cannot be renamed
      • Mirroring cannot work for a DB with Filestream filegroups.
      • Filestream filegroups cannot be created on the principal server.
      • Cross database and distributed transactions are not supported for mirrored databases
    • Before mirroring can occur, you must restore a database using the Norecovery option on the mirror instance.
      • Apply transaction logs with Norecovery option before mirroring is enabled.
    • Firewall rules
      • traffic must be permitted through the endpoint using port permissions for the port configured
    • Configuring mirroring with Windows authentication
      • This is possible if the service accounts on all servers are in the same active directory domain.
        • Otherwise, a login must be created in the master database on each side for the service accounts used by the other instances.
    • Configuring mirroring with certificate authentication (this sets the database to mirror in high-safety mode)
      • Used when the service accounts aren't in the same domain.
      • Create certificates and the endpoint
        • Create the certificate, back them up and create a user on the partner instance
          • Create the certificate

create certificate @CertificateName with subject='@CertificateDescription'

          • Backup the certificate on both instances

backup certificate @certificateName to file='@directory\@File.cer';

          • Create a user on the partner instance after moving the backed up certificate to @directory on the partner

Create login @SQL_Login with password=@Password;

create user @UserName for login @SQL_Login;
create certificate @PartnerCertificateName
Authorization @UserName
From file = '@directory\@FileName.cer'

        • Create the endpoint on both instances

create endpoint @EndpointName

as tcp( listener_port= @PortNumber, listener_ip=all)
for database_mirroring(authentication= certificate @CertificateName, encryption = required algorithm AES, role=all);

          • Grant connection permissions to the endpoint on partner instances

grant connect on endpoint::@endpointName to @partnerSQL_Login

      • Configure mirroring on the database on partner instances

Alter database @DBName

Set partner = 'TCP://@partnerName.Domain.com:@PortNumber'

    • Changing Operating Modes
      • You can change mirroring operating mode on the mirroring page of the database’s properties. You can change to high safety with automatic failover mode only if a witness server is present.
      • Done using Alter database statement with the safety option
        • Setting safety to full configures the database for high-safety mode. This is the default mode for DB mirroring
        • Setting the safety option to off configures the DB for high-performance mode.
      • You can add a witness to a mirroring pair by creating the endpoint on the witness, creating logins for the principal and mirror instance service accounts and granting connect privileges on the endpoint to that login.
        • Also, create a login on the principal and mirror instances for the domain account used for the SQL Server service on the witness.
        • To specify the witness

Alter database @DBName

Set witness = 'TCP://@WitnessServerName:@PortNumber'

    • Role switching and failover
      • There are 3 types of role switching
        • Manual failover
          • Available if the mirroring session is configured in high-safety mode.
          • Can be initiated when the mirroring session is synchronized.
          • Initiated on the principal instance using the command

alter database @DBName set partner failover

        • Automatic failover
          • Possible if a witness is present and the mirroring session is configured for high-safety mode.
          • Requires that the mirroring session be in a synchronized state
          • If all instances lose communication, automatic failover will not occur even if the witness and the mirror establish communication.
        • Forced service( with possible data loss)
          • Can be used in high-safety mode when no witness is present or in high-performance mode. The mirror is forced to become the principal when the original principal is not available.
          • Can result in data loss.
          • Initiated on the mirroring instance using the command

alter database @DBName set partner force_service_allow_data_loss

    • Monitoring mirrored DBs
      • Query sys.database_mirroring on either principal or mirroring instance.
      • Use the Database mirroring monitor.
        • Allows you monitor how data is being transmitted between the primary and mirrored instances in a mirroring session and other details of the mirroring session
      • use SPs
        • sp_dbmmonitorresults
        • sp_dbmmonitorchangemonitoring
        • sp_dbmmonitordropmonitoring
        • sp_dbmmonitoraddmonitoring
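      • A minimal status check against sys.database_mirroring might look like (column names as of SQL Server 2012; the filter keeps only mirrored databases):

select DB_NAME(database_id), mirroring_role_desc, mirroring_state_desc
from sys.database_mirroring
where mirroring_guid is not null;
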
    • Upgrading mirrored DBs
      • Requires the following
        • Change the operating mode to high-safety without automatic failover
        • Perform a full backup of the principal server
        • Run DBCC CheckDB on the principal server
        • Upgrade the mirror
        • Manually fail over to the upgraded mirror
        • Upgrade the Witness
        • resume mirroring the DB
        • Switch back to high-safety without automatic failover
        • Upgrade the new mirror instance (the former principal) to SQL Server 2012.
Database replication
  • Replication enables SQL Server to distribute and synchronize data and DB objects from one DB to another
  • Replication components
    • Publisher: An instance that makes data available through publication
    • Article: A published object
    • Publication: A collection of articles
    • Distributor: An instance that manages the transmission from publisher to subscriber.
    • Subscriber: An instance that receives publications
    • Agent: A service that enables the publisher, distributor or subscriber to perform replication-related tasks.
  • Replication Types
    • Snapshot replication:
      • Performs complete refreshes of data rather than incremental updates
      • The publisher synchronizes with subscribers by creating and transmitting a DB snapshot.
        • The snapshots are generated using bcp
        • They are stored in a folder that the subscriber needs access to for pull subscriptions (but not for push subscriptions).
      • Suitable for DBs that change infrequently
    • Transactional replication:
      • Allows subscribers to stay up to date with changes on the publisher
      • Changes to the publisher propagate to the subscribers as they occur.
      • Replication occurs from the publisher to the subscriber.
      • Updates occur at the publisher
      • Allows an Oracle database to publish to SQL Server
      • Changes must be applied to the subscriber in the order they occurred on the publisher
    • Peer to peer replication:
      • Enables peer nodes to read and write changes and have those changes propagate to other nodes in the replication topology
      • You prevent conflicts by partitioning the data so the same row will not be updated at separate locations.
        • If the data isn't partitioned properly and a conflict occurs, replication fails or the changes will not replicate.
      • Requirements
        • All instances must run SQL-Server enterprise edition.
        • Row and column filtering aren’t allowed
        • Publication names must be identical on all instances participating in the replication
        • Each participant should use its own distribution DB to ensure there is no single point of failure
        • Tables in multiple peer to peer publications cannot be included in the same publication DB
        • Peer-to-peer subscriptions cannot be reinitialized. You must restore a backup at a node to force a data update
        • If you add new nodes and have to perform a restore, you must only use backups created with the new nodes in the topology
    • Merge replication:
      • For scenarios where data conflicts are possible and any node may issue data changes that will eventually be consistent across all nodes.
      • Allows databases hosted on instances at separate locations to be updated and those changes replicated to all other DBs in the replication topology.
      • Publisher and subscribers track changes made using triggers.
      • synchronization involves exchanging all rows that have changed between the publisher and subscriber since the last synchronization.
      • Requirements
        • Subscriber can make offline changes to data and synchronize those changes back when they connect again.
        • You can detect and resolve update conflicts
        • You must support non-SQL server nodes participating in the replication
        • Applications do not require tables to be transactionally consistent
      • It uses SQL server snapshot agent and the merge agent for conflict resolution.
        • If the merge agent detects a conflict, e.g a row being updated with different values at different locations, the conflict resolver determines which data is accepted.
        • When configuring a subscription in merge replication, you specify a subscription type and a conflict resolution priority.
          • Servers assigned a higher priority override servers assigned a lower priority.
          • When you use the client subscription type, the priority for conflict resolution is first to the publisher wins:
          • when a conflict occurs, the first change synchronized to the publisher is kept and later conflicting changes are dropped.
  • Replication monitor
    • Can be used to monitor replication performance.
    • Can be used to configure alerts based on performance benchmarks.
  • Size limitations for Replication
    • 256 merge publication articles
    • 32767 Snapshot or transactional publication articles
    • 246 Merge publication columns in a table
    • 1000 snapshot or transactional publication columns in a table
    • 1024 merge replication bytes for a column used in a row filter
    • 8000 snapshot or transactional publication bytes for a column used in a row filter
  • Controlling replication of constraints, columns and triggers
    • The not for replication option enables you to configure foreign key constraints, check constraints, identity column values and triggers to behave differently when an insert, update or delete operation is performed by a replication agent than they do in normal operation.
      • Objects marked with this option are not enforced (or, for triggers, not fired) for operations performed by a replication agent.
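      • A minimal sketch (hypothetical table and constraint names) of marking an identity column and a check constraint not for replication:

```sql
-- The identity value and the check are skipped when a replication
-- agent performs the insert, but enforced for normal user activity.
CREATE TABLE dbo.Orders
(
    OrderID  INT IDENTITY(1,1) NOT FOR REPLICATION PRIMARY KEY,
    Quantity INT,
    CONSTRAINT CK_Orders_Quantity
        CHECK NOT FOR REPLICATION (Quantity > 0)
);
```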
  • Heterogeneous Data.
    • You can integrate with Oracle and DB2
      • Oracle can publish to DBs on SQL server
        • The data can be published on the SQL server to serve other DBs on Oracle, DB2 or SQL Server.
      • SQL Server can publish or distribute data for replication to Oracle and DB2
    • This doesn’t support Merge or peer to peer replication.
Clustering and Always On
  • Failover clustering is an approach to high availability that ensures a database remains available in the event of a server failure.
    • A failover cluster instance is a deployment of SQL server that stores DB files on a shared storage device. If the server that hosts an instance fails, another Instance in the cluster takes control of the files and seamlessly continues to provide service.
      • The DB files must be on a SAN
      • Connect the server to the SAN using the iSCSI initiator
    • Failover clustering requires the windows failover cluster feature.
    • Installing a SQL failover cluster
      • During installation of SQL server, select advanced then advanced cluster preparation
      • Conclude installation of the instance
    • Adding a node to the cluster
      • During installation of SQL server, select advanced then advanced cluster completion
    • To perform a manual failover,
      • using powershell

move-clustergroup -name @ClusterResource -node @DestinationServer

    • Troubleshooting failover clusters
      • If failover occurs and a node is irrecoverable,
        • evict the node from the failover cluster
        • Replace the hardware and add the node back to the cluster
        • run SQL setup to admit the node to the failover cluster instance.
  • AlwaysOn is a technology that replaces DB mirroring. It allows clients read-only access to the secondary replica
    • An availability group is a collection of user DBs(availability DBs) that can fail over together.
    • Supports a set of read-write primary databases and up to four sets of secondary databases.
    • They allow you configure one or more sets of secondary databases so that they are accessible for read-only operations
    • Failover occurs on a per-replica basis, and all DBs in the replica fail over.
    • Availability group prerequisites.
      • All hosts must be nodes in a failover cluster.
    • Availability modes
      • The availability mode selected depends on data loss and transaction latency requirements.
      • There are the following availability modes
        • Asynchronous-commit : Suitable when you must place availability replicas at different locations.
          • The primary will not wait for the secondaries to write log records to disk. This reduces transaction latency.
        • Synchronous-commit : increases transaction latency but minimizes the chance of data loss in the event of an automatic failover
          • The primary waits for the secondary replica to write each transaction’s log records to disk before the commit completes.
      • You can alter availability mode using the alter availability group statement

alter availability group @AvailabilityGroupName modify replica on '@Host\@InstanceName' with (availability_mode = @AvailabilityMode)

    • Selecting failover modes
      • Availability groups fail over at the availability replica level.
      • Failover involves another instance becoming the primary replica, with the original primary replica being demoted to become a secondary replica. There are 3 types
        • Automatic failover:
          • Occurs without administrator intervention.
          • Involves no data loss
          • requires the primary and a secondary to be configured with a failover mode of automatic and be synchronized.
          • Can occur only if the primary and replica are in synchronous-commit mode.
        • Planned Manual failover
          • Triggered by an administrator
          • Involves no data loss
          • Can occur if at least one of the secondary replicas is in a synchronized state.
          • Require primary and replica instances to be in synchronous-commit mode.
          • To carry out manual failover,
            • TSQL(run the following on the proposed primary instance)

alter availability group @AvailabilityGroup failover;

            • Powershell (run on the proposed primary instance)

switch-sqlavailabilitygroup -path SQLSERVER:\SQL\@HostName\@FailoverClusterName\AvailabilityGroups\@AvailabilityGroupName

        • Forced manual failover
          • This form of failover involves the possibility of data loss. Use forced manual failover when no secondary replica is in the synchronized state or when the primary replica is unavailable.
          • Can work if either node is set to asynchronous-commit mode.
          • Done using
            • TSQL(run the following on the proposed primary instance)

alter availability group @AvailabilityGroup force_failover_allow_data_loss;

            • Powershell (run on the proposed primary instance)

switch-sqlavailabilitygroup -path SQLSERVER:\SQL\@HostName\@FailoverClusterName\AvailabilityGroups\@AvailabilityGroupName -AllowDataloss

    • Configuring readable secondary replicas
      • Readable secondary replicas can service read-only requests for DB access
    • Deploying Alwayson availability groups
      • Requires the following tasks
        • create a mirroring endpoint:
          • Not necessary if using a domain account for all SQL server service, as it is automatically created
        • enable alwaysOn on the instance
          • From configuration manager. Enable AlwaysOn on the instance.
          • Powershell:

enable-sqlalwayson -path SQLSERVER:\SQL\@Hostname\@Instance

        • create an availability group
          • From SSMS
            • Use the always on availability node.
            • To add an encrypted DB or a DB with an encryption key, use add a secondary replica.
          • TSQL

create availability group @AvailabilityGroupName
for database @DbName
replica on '@ReplicaHostName1\@InstanceName' with
( endpoint_url = 'TCP://@ReplicaHostName1[.@Domain.com]:@Port',
availability_mode = @AvailabilityMode, failover_mode = @FailoverMode),
'@ReplicaHostName2\@InstanceName' with
( endpoint_url = 'TCP://@ReplicaHostName2[.@Domain.com]:@Port',
availability_mode = @AvailabilityMode, failover_mode = @FailoverMode);
        • create an availability group listener
          • An availability group listener is a network connectivity endpoint for an availability group.
          • Clients connect to the listener, which connects them to the availability group’s primary instance.
          • To create,

alter availability group [@AvailabilityGroupName] add listener '@ListenerName' (with ip (('@ListenerIP', '@ListenerSubnet')), port = @PortNumber);
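          • Clients can then reference the listener instead of a physical server name; a hedged ADO.NET-style connection string (placeholder names as above) might look like:

```
Server=tcp:@ListenerName,@PortNumber;Database=@DbName;Integrated Security=SSPI;MultiSubnetFailover=True
```

          • MultiSubnetFailover speeds up client reconnection when the listener spans multiple subnets.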

        • add a secondary replica
          • Requires some conditions
            • fewer than four secondary replicas
            • primary replica is online
            • Secondary replica’s instance can connect to the primary replica mirroring endpoint
            • Secondary replica instance has enabled alwaysOn
          • To add

alter availability group @AvailabilityGroupName join;

Troubleshooting SQL server
  • Performance monitor
    • A windows utility for capturing statistics about the hardware, OS and apps to expose properties and counters.
    • Collects and displays real-time performance data in the form of counters for server resources e.g. processor and memory use.
    • Data collector sets provide the ability to group data collectors into reusable elements for different performance-monitoring scenarios.
      • They organize data collection points into a single component that can be used to review or log performance.
      • Can be recorded individually, grouped with other into logs, and used to generate alerts.
      • They can contain
        • Performance counters
        • Event trace data
        • System configuration information(registry key values)
    • Counters are organized into a 4 level hierarchy
      • An object is a resource that can be monitored.
        • a component, application or subsystem within an application.
      • an object exposes one or more counters
      • a counter might have several instances if more than one resource of that type exists e.g. multiple processors
    • When you define counters to capture, you can specify criteria at any level within the hierarchy.
    • Counters are used in conjunction with other counters and environmental info to diagnose problems
    • 3 Counters indicate a system problem on their own. Their values show how many requests are waiting for the resource to become available. When their values are greater than 0 for a sustained period, a hardware bottleneck is indicated
      • System processor Queue Length
        • Indicates when the processor is overwhelmed with activity
      • Network Interface output queue length
        • Indicates when the network interface is overwhelmed with activity
      • Physical disk avg. disk queue length
        • Indicates when the disk is overwhelmed with activity
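      • These can be sampled from PowerShell with Get-Counter (counter paths as they appear in Performance Monitor):

```powershell
# Sample the three queue-length counters every 5 seconds, 12 times.
# Sustained values above 0 point to a hardware bottleneck.
Get-Counter -Counter '\System\Processor Queue Length',
                     '\Network Interface(*)\Output Queue Length',
                     '\PhysicalDisk(_Total)\Avg. Disk Queue Length' `
            -SampleInterval 5 -MaxSamples 12
```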
  • SQL server profiler
    • Most performance issues can be improved by tuning poor performing queries. SQL server profiler helps identify where to focus query tuning efforts.
    • SQL server profiler and extended events profiler provide the ability to trace the activity that is occurring in SQLEngine.
      • Server profiler can also trace AS activity.
      • Traces captured by both can be used to diagnose performance issues and replay workloads.
      • Profiler can be configured to filter events. Filter evaluation itself has a small cost,
        • but filtering minimizes the overall overhead that is incurred during tracing
    • Captured events are displayed graphically and can be captured to a file or DB table.
      • creating trace files is an efficient way to store traces using minimal system resources.
    • A category is a group of related event classes. An event class contains all the data columns that can be reported by an event.
      • An event is the occurrence of an action within an instance of the SQLEngine and is defined by its attributes.
    • Trace columns represent data that can be captured when an event occurs. Data columns contain the attributes of events
    • Runs on a machine and communicates to the SQLEngine using SQL Trace procedures.
    • SQL Trace is a feature that runs in SQLEngine and runs traces using sys.SPs.
      • It is lightweight
      • Trace must be defined using system stored procedure calls
      • runs inside the SQLEngine
      • Do not automatically restart if the instance is restarted. To auto start it, script it and put it in a startup procedure.
      • SQL has a default trace that starts with the instance.
        • This includes SQL server events
        • objects being created or dropped.
      • Traces can be opened using profiler and imported into a table.
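      • For example, the default trace file can be located and read back with the fn_trace functions:

```sql
-- Find the default trace and its file path.
SELECT id, path FROM sys.traces WHERE is_default = 1;

-- Load its events into a result set (substitute the path found above).
SELECT TOP (100) e.name AS EventName, f.StartTime, f.DatabaseName, f.ObjectName
FROM sys.fn_trace_gettable('@DefaultTracePath', DEFAULT) AS f
JOIN sys.trace_events AS e ON e.trace_event_id = f.EventClass;
```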
  • Monitoring SQL server
    • Activity monitor is a graphical representation of monitoring information that uses DMVs and DMFs
      • It provides an overview of the system performance and information about processes, waits, I/O resources and expensive queries.
    • Dynamic management views(DMVs) and dynamic management functions(DMFs) provide insight into current and historical SQL server activity.
      • They return server state information that can be used to monitor the health of an instance, diagnose performance and tune performance.
      • There are 2 types of DMVs and DMFs.
        • Server-scoped DMVs and DMFs
          • They require View server state permission on the server
        • Database-scoped DMVs and DMFs
          • they require view Database state permission on the database
      • common DMO categories
        • sys.dm_exec_*: info about connections, sessions, requests and query execution
        • sys.dm_os_*: info about OS related stuff
        • sys.dm_tran_*: access to transaction management
        • sys.dm_io_*: provide information on I/O processes
        • sys.dm_db_*: provide database scoped information.
    • Querying DMVs
      • DMV/Fs cannot be referenced using one-part names.
      • They exist in the sys schema and are named in the convention dm_*
      • They expose in-memory structures, which are cleared if the instance is restarted.
    • There are 2 types of DMOs
      • Objects that provide real-time state information
      • Objects that provide recent historical information
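      • As a sketch, the sys.dm_exec_* DMOs can be combined to rank cached queries by CPU (requires view server state permission):

```sql
-- Top 5 statements by total CPU since their plans entered the cache
-- (figures reset when the instance restarts).
SELECT TOP (5)
    qs.total_worker_time AS TotalCpuTime,
    qs.execution_count,
    st.text AS QueryText
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```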
  • Data collector
    • Provides a high-level view into aggregated trends over time and information to identify issues.
    • Allows you capture historical data and access detailed data when investigating issues.
    • Provides a central point for data collection across DB servers and applications.
    • Stores collected data in a relational database known as the management data warehouse(MDW).
      • Data can be collected constantly or on a schedule
    • When you configure the MDW, some system data collection sets are created.
    • Data collection sets
      • These define the data that needs to be collected, how often it is uploaded to the warehouse and how long it is retained in the warehouse.
      • They can query DMOs to retrieve detailed information about the operation of the system
      • Can retrieve performance counters that provide metrics about the performance of both SQL server and the entire server
      • Can capture SQL trace events.
    • Data collector topology: The data collection system consists of 3 components
      • The data collector: a set of jobs that run on the local server and collect data from DMOs, performance counters and SQL trace events and upload to the MDW
      • The MDW which can consolidate data from multiple instances
      • 3 standard reports and several subreports to analyze data in the MDW. You can also write your own reports.
    • You create an MDW to
      • gain access to reports that combine information for all server instances in your enterprise
      • offload the storage and reporting of collected data from production servers
      • persist data that is otherwise held only in DMOs
    • configuring the data collector: this involves 2 actions
      • Configure the MDW
      • configure the instances to upload the data
        • Lower volume data is sent immediately to the warehouse
        • Higher-volume information is cached locally and uploaded to the warehouse using SSIS
    • Monitoring data collector
      • configuration and log information for the data collector is written to msdb and is implemented by SPs and SSIS logging features.
    • Analyzing collected performance data
      • After performance data has been collected from a number of instances and been consolidated into a MDW, the data can be analyzed using reports built with SSRS.
      • There are 3 default reports
        • Server activity history
          • contains CPU, memory, disk and network I/O. SQL server waits and SQL server activity
        • Disk Usage History
          • Trends and details of disk and file usage
        • Query statistics history
          • Most expensive queries ranked by CPU, duration, reads and writes
  • Identifying Bottlenecks
    • Monitoring Disk Usage
      • There are 2 counters for disk activity on the OS level
        • PhysicalDisk:% Disk Time
        • PhysicalDisk:Avg. Disk Queue length
      • There are 2 counters for disk activity by SQL server
        • SQL server: Buffer manager: Page reads/sec
        • SQL server: Buffer manager: page writes/sec
        • If the values for these counters approach the capacity limit of the hardware I/O subsystem, tune your application or DB to reduce I/O operations (better indexing, normalization).
    • Monitoring Memory usage
      • There are 2 counters for memory at the OS level
        • Memory: Available Bytes
        • Memory: Pages/Sec
          • this indicates the number of pages retrieved from disk due to hard page faults, or written to disk to free working-set space.
      • Counters for memory usage by SQL server
        • Process: working set
        • SQL Server: Buffer Manager: Buffer Cache Hit Ratio
        • SQL Server: Buffer Manager: Total pages
        • SQL Server: Memory Manager: Total Server Memory(KB)
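      • The SQL Server counters above can also be read from inside the instance via sys.dm_os_performance_counters:

```sql
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio', 'Total pages');
```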
    • Monitoring CPU Usage
      • CPU usage can be monitored using
        • Single Processor systems
          • processor: %Processor time counter
            • Measures the amount of time the CPU spends executing a thread that isn’t idle.
        • Multi processor systems
          • System: % Total processor time counter
            • This takes the average for all the processors. To monitor a single processor, use the instances of the counter
Indexes and concurrency
  • Implementing and maintaining indexes
    • A balanced tree structure consists of the start of the tree i.e. the root node.
      • Beneath the root node are 1…n intermediate level pages
      • each intermediate-level page can have additional intermediate-level pages.
      • the last level comprises the leaf-level pages.
      • Query optimizer knows at most how many page accesses are needed to reach a leaf page.
      • When a page is full, it is split to maintain this balanced condition.
    • A heap is an unorganized collection of data; rows are appended to the last data page as they arrive for storage.
      • This makes it terrible for data retrieval but wonderful for data loading.
    • Clustered and non-clustered indexes can be partitioned to provide better performance.
    • Types of Indexes
      • Clustered
        • A structure in which the actual table data resides in the leaf-level pages of the index. The data is physically sorted in the order of the clustering key.
        • The clustered index key(CIK) is the row identifier for the table data.
          • It is added to any non-clustered indexes created on the table.
          • The key should have the following specifications.
            • Unique
            • Narrow
            • Static: Not updated.
      • Non-clustered
        • It contains only keys and no actual data.
        • In addition to the key columns, SQL server stores the clustering key for that row in the table.
          • This allows the storage engine to use a non-clustered index to reduce the number of rows returned and then use the clustering key to go to the clustered index to retrieve the data requested.
        • They can be applied to heaps or clustered table. The difference is in how rows are identified
          • In clustered tables, the non-clustered index contains the clustered index key to locate the rows in the base table
          • In heaps, The non clustered index uses a row identifier in place of the CIK.
      • Covering indexes
        • A non-clustering index that contains all the information needed to satisfy a query
        • Allow adding information to the non-clustered index data pages and avoid having to look up the row in the clustered index.
      • Filtered indexes
        • Basically an index with a where clause. This allows you to limit the amount of data in the B-Tree.
      • Primary XML and secondary XML indexes
        • To provide reasonable performance when querying XML data.
        • If not created, the entire XML document must be loaded into an XML DOM and searched
      • Spatial Indexes
        • Improve performance when querying data of geography and geometry data types
      • Full-text indexes
        • Enable the efficient searching of text data, often using words that occur near each other in the text
      • Columnstore indexes
        • The index uses a column oriented storage that is very efficient for data warehouse operations.
        • Doesn’t use the B-tree structure.
    • Designing indexes for efficient retrieval
      • When constructing multicolumn indexes, list equality columns (columns tested in a where clause with =) before inequality columns. Within each group, list the most selective columns first.
      • When joining, index the column used to join to the other table.
      • Clustered indexes
        • A clustered index is the most efficient index for retrieval with
          • Queries that return a majority of the columns in the table
          • Queries that return a single row based on the CIK
          • Queries that return range-based data
      • Non-clustered indexes
        • A Non-clustered index is the most efficient index for retrieval with
          • Queries that return few rows
          • Queries that can be covered by the index.
        • They have a maximum limitation of 16 key columns and 900 bytes
      • Covering indexes
        • Consider using include columns for columns in the select list that aren’t already part of the key columns
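        • A sketch with hypothetical table and column names; the key column supports the seek while the include columns cover the select list without counting against the key-size limits:

```sql
-- Covers: SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = @Id
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);
```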
      • Filtered indexes
        • A filtered index is the most efficient index for retrieval
          • When combined with sparse columns to locate specific non-null rows
          • When queries that are a small subset of the rows are selected often
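        • A sketch with hypothetical names, indexing only the small subset of rows queries actually touch:

```sql
-- Only rows with a non-null EndDate enter the B-tree.
CREATE NONCLUSTERED INDEX IX_Assignments_EndDate
ON dbo.Assignments (EndDate)
WHERE EndDate IS NOT NULL;
```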
      • Primary XML and secondary XML indexes
        • The first XML index must be a primary XML index type.
          • The primary XML index builds a nodes table.
          • For each element in the XML document, there will be one row in the nodes table. If you have many rows, each of which contain an XML document with many nodes, the size of the XML index can be quite large.
        • There are 3 secondary XML index types
          • Path
            • Consists of a regular non-clustered index.
            • supports operations such as .exist()
          • Property
            • Consists of a regular non-clustered index
            • Useful for searching for nodes that have particular values
          • Value
            • Consists of a regular non-clustered index
            • The reverse of the path index (value before path)
            • most useful for finding rows that contain a known value when the path to it is not precisely known
      • Full-text indexes and semantic searches
        • Special indexes on char, text, image, xml and varbinary data types.
        • These are token based functional indexes that store information about significant words, where they exist within text within a column or table in the DB
        • Only one is allowed per table, and it contains all the tokens for all columns that were indexed for that table.
          • This allows you to efficiently search for significant words across all of a table’s indexed columns
        • Allows you to carry out inflectional searching.
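        • For example (hypothetical table; assumes a full-text index exists on the column), an inflectional search:

```sql
-- Matches word forms generated from 'drive': drives, drove, driving, ...
SELECT DocumentID, Title
FROM dbo.Documents
WHERE CONTAINS(Title, 'FORMSOF(INFLECTIONAL, drive)');
```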
      • Columnstore indexes
        • They are a non B-tree type of index.
        • used to provide performance improvements for data warehouse type queries that perform aggregations over large data sets.
          • These queries typically use only a few of a table’s columns, which suits a column-based index structure
        • They optimize storage by compressing repeating data values
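        • A sketch with hypothetical fact-table names; note that in SQL Server 2012 a nonclustered columnstore index makes the underlying table read-only while it exists:

```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_Columnstore
ON dbo.FactSales (SaleDate, ProductID, Quantity, Amount);
```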
    • Statistics
      • Statistics are objects in the database that contain statistical information about how the values in a column are distributed. Query optimizer uses this information to determine the cardinality of the rows.
        • cardinality refers to number of rows that exist for a given value.
      • When you create an index, SQL Server creates a histogram of the distribution of values in the leading column of the index.
      • Statistics can be either a sampling of the rows or all rows, based on how they were created or updated and the number of rows in the table.
    • Updating statistics
      • statistics are manually updated using update statistics

update statistics @TableName with @Option

        • @option can be
          • fullscan[[, index][,Columns]]
          • Sample [@NoOfRows Rows]|[@percent percent]
    • Index internals
      • There are DMVs that can be used to retrieve information about indexes. e.g. sys.dm_db_index_physical_stats
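      • For example, to rank indexes in the current database by fragmentation:

```sql
-- 'LIMITED' scans only the parent-level pages and is the cheapest mode.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
ORDER BY ips.avg_fragmentation_in_percent DESC;
```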
    • Creating Indexes

create @IndexType index @IndexName
on @Table (@ColumnList);

    • Fill factors
      • An option when creating indexes
      • They determine how much free space will be left on the leaf data pages.
      • Default is 0 (fill pages completely, leaving no free space)
        • When considering a change, determine the data access pattern of the table.
          • If the CIK is an identity column, new rows always go to the end of the index, so a lower fill factor just wastes space.
          • If the CIK is a random value, a lower fill factor can reduce page splits.
      • To specify a fill factor, rebuild the index

alter index all on @TableName @DefragmentationType

with (fillfactor=@PercentageToFillPageTo)

        • @PercentageToFillPageTo can be 80 if you want the pages to fill to 80%
        • @DefragmentationType is either rebuild or reorganize
    • Fragmentation
      • Fragmentation in index pages is when the logical ordering(based on the key value) doesn’t match the physical ordering within the data file.
      • Fragmentation information is available in DMV sys.dm_db_index_physical_stats
      • Removing index fragmentation.
        • When fragmentation is significant, rebuild or reorganize the index.
          • Indexes should be rebuilt when fragmentation is above 30%
          • Indexes should be reorganized when fragmentation is between 5 and 30%
          • Reorganize
            • Reorganize doesn’t update the statistics histogram for the object.
            • Reorganize is online and doesn’t hold any blocking locks
            • Requires 8Kb free space in the DB
            • Single threaded
          • Rebuild
            • Rebuild is used when heavier fragmentation levels exist or when out of order extents are limiting read-ahead operations.
            • Rebuild requires long-term locks and can cause blocking unless performed online
            • Builds a second copy of the index and requires space to hold this copy.
            • This can use parallel operations
            • Requires Enterprise edition to perform online Rebuild
              • Online rebuild cannot be performed on indexes with XML and spatial data types.
    • Tracking missing indexes
      • Each time the query optimizer compiles a query, SQLEngine tracks up to 500 missing indexes that the optimizer could have used to satisfy queries more efficiently had they existed.
      • It keeps track of queries since the last SQLEngine restart
      • They are calculated using 4 DMOs
        • sys.dm_db_missing_index_columns
          • lists the columns that would comprise the missing index
        • sys.dm_db_missing_index_details
          • list detail on the columns used for equality, inequality and included columns
        • sys.dm_db_missing_index_groups
          • A table that joins the above 2
        • sys.dm_db_missing_index_group_stats
          • Provides the metrics on a group of missing indexes
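      • A common sketch joins three of these DMOs to rank the suggestions (the improvement_measure formula is an informal heuristic, not an official metric):

```sql
SELECT mid.statement AS TableName,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
           AS improvement_measure
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY improvement_measure DESC;
```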
    • Reviewing unused indexes
      • Each time a table has rows inserted, updated or deleted, all indexes on the table must be maintained to reflect those changes
      • If the cost of maintaining an index exceeds the benefit it provides to queries, disable the index.
        • See the maintenance cost versus usage with this script:

SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
OBJECT_NAME(i.object_id) AS ObjectName,
i.name AS IndexName,
i.type_desc AS IndexType,
ius.user_updates AS UserUpdates,
ius.last_user_update AS LastUserUpdate
FROM sys.indexes i
INNER JOIN sys.dm_db_index_usage_stats ius
ON ius.object_id = i.object_id AND ius.index_id = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1 -- user indexes only
AND NOT (ius.user_seeks > 0 OR ius.user_scans > 0 OR ius.user_lookups > 0)
AND i.is_primary_key = 0
AND i.is_unique = 0
ORDER BY ius.user_updates DESC, SchemaName, ObjectName, IndexName;
  • Identifying and resolving concurrency problems
    • Concurrency refers to the ability of SQL Server to service requests from the maximum number of users at the same time
    • Defining transactions and transaction scope
      • A transaction is a way to group, or batch a series of updates to a data so that either all updates succeed and are committed at once or if one fails, none are committed and the entire transaction is rolled back.
        • To preserve the consistency of the data involved in the batch of updates, locks are held on the rows that are being updated.
        • The length of time that these locks are held is known as the transaction scope.
        • Concurrency can be improved by ensuring transaction scopes are as short as possible.
      • SQLEngine will lock the minimum number of resources needed to accomplish its goal(multi granular locking).
        • To do this, it must announce its intent to take locks to higher levels.
          • To modify a row, an exclusive lock must be obtained on the row, and intent exclusive locks on the page and higher levels.
          • This is done for lock management. If another transaction wants a lock, SQLEngine checks locks from the highest level down to determine if the resource is locked.
      • Lock modes
        • Shared: For read operations that do not change or update data. E.g. select
        • Update: Used on resources that might be updated
        • Exclusive: Used for data modification operations (e.g. insert, update, delete). Ensures multiple updates cannot be made to the same resource at the same time.
        • Intent: Used to establish a lock hierarchy. The types are
          • intent shared(IS),
          • intent exclusive(IX)
          • shared with intent exclusive(SIX).
        • Schema: Used when an operation dependent on the schema of a table is executing. The types are schema modification(Sch-M) and schema stability(Sch-S)
        • Bulk update(BU): Used when bulk copying data into a table and the tablock hint is specified
        • Key-range: Protects the range of rows read by a query when using a serializable transaction isolation level. Ensures other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.
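      • Current lock requests, including these modes, can be inspected with sys.dm_tran_locks:

```sql
SELECT request_session_id,
       resource_type,
       request_mode,    -- e.g. S, U, X, IS, IX, SIX, Sch-S, Sch-M
       request_status   -- GRANT, WAIT or CONVERT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```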


      • Lock management
        • SQLEngine uses lock management to increase concurrency.
        • Query optimizer determines the scope of locks to be taken on a database at the beginning of the transaction.
        • SQLEngine can dynamically escalate the lock type during execution.
      • Transaction Isolation levels
        • There are 4 ANSI isolation levels
          • Serializable
          • Repeatable read
          • Read committed(default)
          • Read uncommitted
        • SQL server introduces 2 custom isolation levels that use row versioning in tempdb
          • Snapshot Isolation
            • This improves concurrency and has the same characteristics as the serializable isolation level.
            • It specifies that the data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction.
              • Data modification made by other transactions after the start of the current transaction are not visible to statements running in the current transaction.
            • Configured on the database and requires you to set the transaction isolation level at the beginning of the transaction
Alter Database @DBName set allow_snapshot_isolation on

Set transaction isolation level snapshot;

          • Read Committed snapshot isolation
            • Similar to the snapshot isolation level in that it allows readers to read data without being blocked by locks acquired by writers.
            • Doesn't require the user to ask for it explicitly; it is applied automatically whenever the session is at the read committed isolation level.
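Unlike snapshot isolation, this is switched on once at the database level and needs no per-session setting afterwards. A minimal sketch, following the @DBName placeholder style used elsewhere in these notes:

```sql
-- Enable read committed snapshot isolation for the database.
-- Requires exclusive access; WITH ROLLBACK IMMEDIATE disconnects other sessions.
ALTER DATABASE @DBName SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```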
        • The characteristics of the isolation levels
          • Dirty read
            • When a transaction is allowed to read data from a row that has been modified by another running transaction that has not yet committed
          • Non-repeatable read:
            • Occurs when during the course of a transaction, a row is retrieved twice and the values within the row differ between reads
          • Phantom read:
            • Occurs when in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from that returned by the first.


          • To query the isolation level of a transaction, use

DBCC useroptions

        • Lock duration is based on the lock owner, which can be
          • The transaction
          • A server side cursor
          • The session


        • Monitoring locks
          • DMVs provide you information about locks and transactions currently in effect.
          • sys.dm_tran_locks displays the locks being held by open transactions
          • sys.dm_tran_database_transactions displays the open transactions at the DB level
          • sys.dm_tran_session_transactions correlates associated sessions and transactions
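The DMVs above can be combined into a single picture of who holds what; a sketch (the column choice is illustrative):

```sql
-- Locks held or requested by open transactions, with the owning session.
SELECT tl.request_session_id,
       tl.resource_type,
       tl.resource_database_id,
       tl.request_mode,    -- e.g. S, X, IS, IX, SIX
       tl.request_status   -- GRANT, WAIT or CONVERT
FROM sys.dm_tran_locks AS tl
JOIN sys.dm_tran_session_transactions AS st
  ON tl.request_owner_id = st.transaction_id
WHERE tl.request_owner_type = 'TRANSACTION';
```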
      • Blocking
        • Blocking occurs when transactions are waiting for a resource to be freed due to another transaction’s lock on it.
          • Can be identified when
            • request_status in the sys.dm_tran_locks DMV shows the value of wait.
            • If the blocking_session_id of sys.dm_exec_requests or sys.dm_tran_locks > 0, it indicates blocking
        • Blocking can also occur due to compiling or recompiling query execution plans.
          • Can be identified when
            • wait_resource in sys.dm_exec_requests contains compile
            • resource_subtype in sys.dm_tran_locks
        • Blocks can be avoided by
          • keeping transaction scope as short as possible
          • Not allowing user interactions during transactions.
          • Practicing proper indexing to limit locks acquired and reduce the chance of blocking
          • Elevating the transaction isolation level above the default only for a good reason
          • Examining the T-SQL code to see whether developers have added locking hints, index hints, or join hints
        • Using AlwaysOn Replicas to improve concurrency
          • Reporting queries can reduce concurrency in OLTP systems, as they often carry out aggregation that blocks insert, update, and delete operations.
          • A new feature allows you offload reporting load to a read-only replica created with always on.
          • To do this, add a clause to the connection string ApplicationIntent=ReadOnly.
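A sample connection string (the listener and database names are hypothetical); read-intent routing only applies when connecting through the availability group listener rather than an individual replica:

```
Server=tcp:MyAgListener,1433;Database=Sales;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True
```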
        • Deadlocks
          • Deadlocks occur when 2 or more tasks permanently block each other because each has a lock on a resource that the other tasks are trying to lock.
          • Resources that could cause a block are
            • Locks.
            • Worker Threads.
            • Memory.
            • Parallel query execution-related resources.
            • Multiple active result sets(MARS) resources.
          • Types of deadlocks
            • conversion deadlocks
              • They occur when two connections both own shared locks on a resource and they both try to convert their locks to exclusive locks.
            • writer-writer deadlocks
              • caused by resource ordering
              • Fixed by ensuring that all code access resources in the same order
            • Reader-writer deadlocks
              • Occur when both connections hold exclusive locks on different resources, and each tries to request a shared lock on the other connection’s resource
            • Cascading deadlocks
              • They occur when there are more than 2 connections involved in the deadlock. Even after a deadlock victim has been chosen, another deadlock still exists.
              • Deadlock victims will continually be chosen until all the deadlocks in the chain have been resolved.
          • SQL server has a dedicated lock monitor thread that periodically checks for deadlocks
          • Deadlocks raise a 1205 error upon which one of the processes in the deadlock is terminated and rolled back.
            • The victim is usually the one with the fewest transaction log records
              • This behavior can be overridden by setting deadlock_priority on the connection
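For example, a long-running reporting batch can volunteer to be the victim:

```sql
-- Make this session the preferred deadlock victim.
-- Accepts LOW, NORMAL, HIGH or a numeric value from -10 to 10.
SET DEADLOCK_PRIORITY LOW;
```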
          • Capturing deadlock information
            • Using trace flags
              • Turn on trace flag 1204
                • Writes deadlock information to the SQL Server error log in a text format
              • Turn on trace flag 1222
                • Writes deadlock information to the SQL Server error log in an XML format
              • The trace flags can be turned on using DBCC Traceon

DBCC Traceon(@TraceFlagNo, -1)

                • This allows you to carry out tracing to the SQL Server error log without the overhead of a profiler trace
            • For more detailed information, use
              • SQL trace to capture
                • lock:deadlock chain
                • deadlock graph
                • lock:deadlock
              • Extended events
                • xml_deadlock_report events in the system_health session
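Past deadlock reports can be pulled from the system_health session's ring buffer after the fact; a sketch (the XPath reflects the usual shape of the ring buffer XML):

```sql
-- Extract xml_deadlock_report events from the system_health session.
SELECT XEvent.query('.') AS deadlock_report
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
      ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS rb
CROSS APPLY rb.target_data.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS XTbl(XEvent);
```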
      • Activity monitor
        • Used to monitor the health of an instance.
        • It has 4 views to show real time metrics
          • %Processor time
          • Waiting tasks
          • Database I/O
          • Batch requests per second
      • Diagnosing bottlenecks
        • Microsoft recommends using the waits and queues methodology to solve bottlenecks


          • First determine what is causing the bottleneck
            • Examine sys.dm_os_wait_stats so you can see the cumulative wait times for every possible wait type since SQL server was restarted.
              • To reset it run DBCC sqlperf('sys.dm_os_wait_stats', clear);
            • A high cumulative wait time for a particular wait type can indicate a bottleneck on the corresponding resource.
            • A task is a unit of work scheduled for execution.
              • It can be in 1 of 3 stages
                • Running: currently executing
                • Runnable: It has the resources it needs and is waiting for a scheduler to execute
                • Suspended: Waiting for a resource
              • You monitor task status using sys.dm_os_waiting_tasks
                • The wait_duration_ms column shows how long the task has been waiting
                • The wait_type shows what the wait type is for that session
SELECT wt.session_id, wt.wait_duration_ms, wt.wait_type, wt.blocking_session_id, s.status
FROM sys.dm_os_waiting_tasks AS wt
JOIN sys.dm_exec_sessions AS s ON wt.session_id = s.session_id;
      • Using reports for performance analysis
        • The standard reports provide a lot of information in graphical format.
        • Custom reports can be added for additional reports access.
        • SQL server 2012 performance dashboard report shows a simple method of seeing current and historical reports for a database.
      • The Kill process
        • Used to remove a session that is not responsive or is the head blocker in a blocker chain.
        • Can be done using
          • Activity monitor: Select the process and click kill
          • TSQL:

Kill { @SessionID } [with statusonly]

            • The with statusonly option reports the progress of the rollback as the killed session is returned to a transactionally consistent state.
            • Kill cannot be used to kill your own user processes
SQL server agent, backup and restore
Managing SQL server Agent
  • Server Agent enables you to automate execution of scheduled or event-driven administrative jobs.
    • A job is a collection of tasks(Job steps).
    • The Server agent account determines how the system interacts with resources(Both internal and external).
      • If The agent account is on the domain, it needs
        • Logon as a service right
        • Membership of the SQL Server Agent security group, so that jobs owned by domain users who aren't in the host's local Administrators group don't fail
        • Membership of the sysadmin fixed server role
      • To support SQL server agent proxies, the service account must have additional permissions
        • This allows users that aren't part of the sysadmin role to create jobs
      • If you want to support multi-server job processing, you must add the account used for agent to the TargetServersRole in msdb on the master server.
    • Server agent security
      • If a user is not a member of the sysadmin role, they must be a member of one of the following msdb roles to use server agent.
        • SQLAgentUserRole: Users have permissions only on the local jobs and job schedules they own. They cannot run multi-server jobs
        • SQLAgentReaderRole: Members have SQLAgentUserRole permissions and can view the properties of all jobs and job schedules including multi-server jobs
        • SQLAgentOperatorRole: Members can execute, enable, disable, stop or start all jobs and can delete the job history for any local job.
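Membership in these roles is granted in msdb like any other database role; a sketch (the login name is hypothetical):

```sql
USE msdb;
-- Let a non-sysadmin operator run and manage all local Agent jobs.
ALTER ROLE SQLAgentOperatorRole ADD MEMBER [CONTOSO\JobOperator];
```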
    • SQL server agent mail
      • Database mail is used to transmit alerts as email messages.
    • SQL server agent error log
      • This stores warnings and error messages from server agent.
    • SQL server can be configured to generate alerts when it detects events and trigger conditions
      • Generating an alert involves
        • Creating an alert name
        • Specifying an alert trigger
        • Specifying an alert action
    • Jobs are used to automate routine tasks.
      • They can be scheduled to run at particular times
      • They can be monitored with the job activity monitor.
      • Notifications can be sent to operators from SQL agent.
      • In multi server environments
          • Servers are either targets or master servers.
          • Target servers report to master servers
        • Server agent’s service account should be on the domain.
        • To link a target to a master, on the target, run sp_msx_enlist, and to delist, run sp_msx_defect
  • Backup strategy
    • SQL server supports the following backup types
      • Full DB backups
        • Includes all DB objects, system tables and data.
        • Transactions that occur during the backup are also recorded
        • Allow you to completely restore a database.

Backup database @DBName to @Destination

      • Differential backups
        • Backs up data altered since the last full backup regardless of whether other differential backups have been taken.
        • Require less time than full DB backups
        • Transactions that occur during the backup are also recorded

Backup database @DBName to @Destination with differential

      • Transaction log backups
        • This records the changes that have occurred since the previous transaction log backup then truncates the transaction log thus removing transactions from the log that have been committed or rolled back.
        • They represent the state of the transaction log at the time the backup starts.
        • They are incremental backups and must be restored in sequence of being taken.

Backup log @DBName to @Destination

      • File and filegroup backups
        • This backs up individual DB files and filegroups rather than performing a full DB backup.
        • Useful for very large DBs
        • The transaction log must also be backed up
        • The truncate log on checkpoint option must be disabled.
      • Copy only backups
        • Functionally the same as full DB or transaction log backups.
        • They do not affect the backup sequence
          • It doesn’t truncate the transaction log and is ignored by differential backups as a reference point.
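A copy-only backup is the same BACKUP statement with one extra option; a sketch in the placeholder style used above:

```sql
-- Ad hoc full backup that leaves the backup chain untouched.
BACKUP DATABASE @DBName TO @Destination WITH COPY_ONLY;
```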
    • You can use multiple backup types in a backup strategy.
    • Backup compression reduces the amount of space required to store a backup
      • CPU load is increased during the compression.
      • Resource governor can be used to limit CPU usage.
      • Compressed and uncompressed backups cannot be stored in the same media set.
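Compression can be requested per backup or set as the instance-wide default; a sketch:

```sql
-- Compress this backup regardless of the instance default.
BACKUP DATABASE @DBName TO @Destination WITH COMPRESSION;

-- Or make compression the default for all backups on the instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```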
    • Recovery models
      • A recovery model determines how you backup a database and the strategy you implement to meet your recovery point objective(RPO).
        • RPOs enable you to measure the amount of data that might be lost in the event of failure. Often the point in time referenced in SLAs
        • RTOs (Recovery Time Objective) measure the maximum amount of time it can take to restore data to the RPO.
      • There are 3 recovery models
        • Simple: No transaction log backups are taken. It is only possible to recover to the most recent full or differential backup.
        • Full: This enables you take transaction log backups and full backups.
          • This allows you to recover to the point of the last full or transaction log backup.
          • Allow point-in-time recovery
        • Bulk-logged: a recovery model that minimizes transaction log activity during bulk operations. You cannot perform point-in-time recovery using this model
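The current model can be checked in sys.databases and changed with ALTER DATABASE; a sketch:

```sql
-- Inspect the recovery model of every database.
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch a database's model (SIMPLE | FULL | BULK_LOGGED).
ALTER DATABASE @DBName SET RECOVERY FULL;
```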
    • System DB backups
      • The preferred backup strategy for each system DB is
        • master: Frequently
        • model: only after modifications
        • msdb: Only after you make changes to jobs and alerts
        • tempdb: Cannot be backed up
        • Resource DB:
          • Holds copies of all system objects
          • Cannot be backed up directly. Require a File-based backup. Should be backed up after patching or upgrading SQL server
        • distribution:
          • Used when the instance is the distributor in a replication topology.
          • Should be backed up when the publisher and subscription DBs are backed up.
    • Backing up replicated DBs
      • When backing up the replicated DBs, you should also back up the system DBs used by the replication process.
      • The following strategies are required
        • At the publisher: master, msdb and publication
        • At the distributor: master, msdb and distribution
        • At the subscriber: master, msdb and subscription
      • When backing up databases that use snapshot and transactional replication, set the sync with backup option on the publication and distribution databases.
        • This ensures transactions in the publication DB log are not truncated until they have been backed up at the distribution DB
    • For mirrored DBs, you can only back up the principal DB. On it, you cannot use the backup log with norecovery option
    • Backing up alwaysOn replicas
      • This depends on the automated backup preference setting applied at the availability group level
      • There are 4 options for availability group backup
        • Prefer secondary
          • Default
          • Backups occur only on secondary except when only the primary is online in which case it uses the primary
        • Secondary only
          • Backups occur only on secondary and do not occur if the primary is the only replica online
        • Primary
          • Backups should be taken on the primary replica.
          • This allows differential backups to be taken
        • Any Replica
          • Backups can occur on any replica in the availability group.
      • The backup preference can be changed using the alter availability group statement with the automated_backup_preference option

Alter availability group @AvailabilityGroupName set (automated_backup_preference = secondary_only)

      • Log backups and copy only backups are supported on replicas
        • You can only backup when the replica is in synchronized or synchronizing state
    • Database checkpoints
      • Checkpoints write all modified pages held in memory to disk then records that information in the transaction log.
      • checkpoints can be configured on the instance using sp_configure with the recovery interval (min) option.
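The recovery interval is an advanced option, so it has to be exposed first; a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Target upper bound, in minutes, on recovery time after a crash.
EXEC sp_configure 'recovery interval (min)', 5;
RECONFIGURE;
```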
    • Backup devices
      • To add a backup device, use the sp_addumpdevice SP

exec sp_addumpdevice 'disk', '@BackupDeviceName', '@directory\file.bak'

        • This allows you to back up to the device directly by specifying @BackupDeviceName

backup database @DBName to @BackupDeviceName

    • Backing up media sets
      • A media set can include one or more files or tapes, but not a mix of the two.
      • A media set can contain up to 32 backup devices.
    • Backup events are written to the Application event log.
      • Successful backups are recorded in the msdb.dbo.backupset table.
  • Restoring DBs
    • Restore options depend on the backups that exist
      • Full backups only: Allow you to restore to the point when the backup was carried out
      • Full and differential backups: Allow you to restore to the point when the differential backup was taken
        • Restore the full backup with the norecovery option
        • restore the differential with the recovery option.
      • Full and transactional log backups.
        • Backup the current transaction log with the no_truncate option.
        • restore the most recent full backup with the Norecovery option
        • restore transaction logs in sequence with the NoRecovery option till you get to the newest
        • Restore the newest transaction log with the recovery option
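The sequence above can be sketched as (backup names are placeholders):

```sql
-- 1. Tail-log backup of the damaged database.
BACKUP LOG @DBName TO @Destination WITH NO_TRUNCATE;

-- 2. Restore the base and intermediate logs without recovering.
RESTORE DATABASE @DBName FROM @FullBackup WITH NORECOVERY;
RESTORE LOG @DBName FROM @LogBackup1 WITH NORECOVERY;

-- 3. Recover with the final (tail) log backup.
RESTORE LOG @DBName FROM @TailLogBackup WITH RECOVERY;
```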
      • Full, differential and transaction logs
        • Backup the current transaction log with the no_truncate option.
        • restore the most recent full backup with the Norecovery option
        • restore the differential with the Norecovery option.
        • restore transaction logs in sequence with the NoRecovery option till you get to the newest
        • Restore the newest transaction log with the recovery option
    • File restores
      • If a DB contains multiple files or filegroups, and uses the full or bulk-logged recovery model, you can perform a file restore rather than a full restore.
      • Done if the volume that hosts a particular file fails or a file becomes corrupt while other files used by the DB do not have any problems.
      • You can perform an online restore only on enterprise editions.
        • It enables you to restore files in secondary filegroups as long as the primary filegroup is online.
    • Page restores
      • Used to repair a small number of damaged pages in an otherwise healthy database.
      • If a DB file constantly suffers from damaged pages, the volume that hosts it is likely faulty.
      • Possible under the following conditions
        • The Database uses full or bulk-logged recovery
        • Only on read-write filegroups
        • Only Db pages can be restored.

Restore database @DBName page='@IDOfFileHostingPage:PageNo1, @IDOfFileHostingPage:PageNo2' from @BackupFile with Norecovery

        • Restore any logs taken after the backup of BackupFile with Norecovery option
        • Perform a log backup
        • Restore the log with the recovery option.
      • AlwaysOn Availability Groups and DB mirrors support automatic page repair. If an error occurs on the primary, it broadcasts a request to the secondary and retrieves the page.
        • To verify the status of automatic page repair
          • Always on: Query the sys.dm_hadr_auto_page_repair DMV
          • DB mirrors: Query the sys.dm_db_mirroring_auto_page_repair DMV
    • Restoring a DB protected with transparent Data encryption
      • You can restore a DB With TDE as long as you have the certificate and private keys.
        • Include backing up the certificate and private keys in your backup scheme.
      • Before performing a restore on a system without the certificate, use the create certificate statement with the from file option.
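A sketch of recreating the certificate on the target server (names, paths and password are placeholders; the target instance needs a database master key in master first):

```sql
USE master;
CREATE CERTIFICATE TDECert
FROM FILE = '@PathToCert.cer'
WITH PRIVATE KEY (
    FILE = '@PathToKey.pvk',
    DECRYPTION BY PASSWORD = '@StrongPassword');
```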
    • Restoring system DBs
      • Model and msdb are restored like user DBs
      • Tempdb doesn’t need to be restored
      • Master:
        • Start the instance in single user mode (specify -m in the startup options of the service)
        • Run the restore DB command

restore database master from @Backup with replace

        • If it is unsuccessful, rebuild the system databases and restore recent copies of all databases

setup.exe /Q /Action=RebuildDatabase /InstanceName=MSSQLServer

    • Restoring replicated DBs
      • When you restore a publication, distribution or subscription DB, you must also restore the master and msdb Databases.
        • This ensures all the DBs are consistent as regards replication configuration and settings
    • Checking DB status
      • Run the following against master to see the DB status

select databasepropertyex('databasename', 'status');

        • Possible statuses are
          • Online
          • Offline
          • Restoring
          • Recovering
          • Suspect: A recovery operation failed and the DB might be corrupt
          • Emergency: The DB is in a read-only state and accessible only to members of the sysadmin role.
      • You can check which users have access to a DB by running the following against master

select databasepropertyex('databasename', 'useraccess')

