
Chapter 7 Replication & Referral

This chapter provides information about configuring LDAP systems for Replication, Referral and Aliases. Replication is an operational characteristic and is implemented through configuration options, whereas Referrals may be generic (an operational characteristic) or explicit (using the referral ObjectClass) within a DIT. Whether an LDAP server follows referrals (known as chaining) or returns a referral is configured within the LDAP server. Additionally, LDAP Browsers usually have the ability to be configured either to follow referrals automatically or to display the Referral entries to enable editing. Aliases are included in this section for no very good reason other than they exhibit 'referral' like behavior in the sense that they can be used to change the strict hierarchical flow of a DIT by causing a 'jump' to a user-defined alternative (or aliased) entry which may exist in an apparently unrelated branch of the DIT. In a trivial sense a referral can be viewed as an inter-LDAP server jump since its dereferencing uses an LDAP URI, and an alias can be viewed as an intra-LDAP server jump since its dereferencing uses only a DN.
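To make the alias behaviour just described concrete, a hypothetical alias entry is sketched below (all DNs are illustrative). The alias ObjectClass carries the mandatory aliasedObjectName attribute holding the target DN; extensibleObject is added so the naming attribute (ou) is permitted:

```ldif
# hypothetical alias entry - a search that dereferences
# ou=sales,dc=example,dc=com 'jumps' to the aliased entry,
# which may lie in an apparently unrelated branch of the DIT
dn: ou=sales,dc=example,dc=com
objectClass: alias
objectClass: extensibleObject
ou: sales
aliasedObjectName: ou=uk-sales,ou=uk,o=grommets,dc=example,dc=com
```

Note that dereferencing uses only a DN (an intra-server jump), in contrast to a referral which uses an LDAP URI.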

Contents

  1. 7.1 Replication and Referral Overview
  2. 7.2 Replication
    1. 7.2.1 OpenLDAP Replication
      1. 7.2.1.1 OpenLDAP syncrepl Style Replication
      2. 7.2.1.2 OpenLDAP syncrepl RefreshOnly
        1. Example Configuration using OLC
        2. Example Configuration using slapd.conf
      3. 7.2.1.3 OpenLDAP syncrepl RefreshAndPersist
        1. Example Configuration using OLC
        2. Example Configuration using slapd.conf
      4. 7.2.1.4 OpenLDAP syncrepl Multi-Master
        1. Example Configuration using OLC
        2. Example Configuration using slapd.conf
      5. 7.2.1.5 OpenLDAP syncrepl SessionLog, Access Logs and Delta-sync
        1. Example Configuration using OLC
        2. Example Configuration using slapd.conf
      6. 7.2.1.6 Syncing DIT before syncrepl Replication
    2. 7.2.2 ApacheDS Replication
  3. 7.3 Referrals
    1. 7.3.1 Deleting Referrals
    2. 7.3.2 Referral Chaining
  4. 7.4 Aliases
  5. 7.5 LDAP Proxy & Chaining
  6. 7.6 Historical - OpenLDAP slurpd Style Replication (Obsolete from 2.4)
    1. 7.6.1.1 Historical - OpenLDAP slurpd Replication Errors (Obsolete from 2.4)
    2. 7.6.1.2 Historical - Syncing DIT before slurp Replication (Obsolete from 2.4)

7.1 Replication and Referral Overview

One of the more powerful aspects of LDAP (and X.500) is the inherent ability within the standard to delegate the responsibility for maintenance of a part of the directory while continuing to see the directory as a consistent and coherent entity. Thus, a company directory DIT may create a delegation (a referral in the LDAP jargon) of the responsibility for a particular department's part of the overall directory (DIT) to that department's LDAP server. The delegated DIT is said to be subordinate to the DIT from which it was referred. And the DIT from which it was referred is said to be superior. In this respect LDAP delegation almost exactly mirrors DNS delegation for those familiar with the concept.

Unlike the DNS system, there is no option in the standards to tell the LDAP server to follow (resolve) a referral (there is a referenced RFC draft in various documents) - it is left to the LDAP client to directly contact the new server using the returned referral. Equally, because the standard does not define LDAP data organisation, it does not contravene the standard for an LDAP server to follow (resolve) referrals, and some LDAP servers perform this function automatically using a process that is usually called chaining.

OpenLDAP takes a literal view of the standard and does not chain by default; it always returns a referral. However, OpenLDAP can be configured to provide chaining by use of the chain overlay.

The built-in replication features of LDAP allow one or more copies of a directory (DIT) to be slaved (or replicated) from a single master thus inherently creating a resilient structure. Version 2.4 of OpenLDAP introduced the ability to provide full N-Way Multi-Master configurations.

It is important, however, to emphasize the difference between LDAP and a transactional database. When an update is performed on a master LDAP enabled directory, it may take some time (in computing terms) to update all the slaves - the master and slaves may be unsynchronised for a period of time.

In the LDAP context, a temporary lack of DIT synchronisation is regarded as unimportant. In the case of a transactional database even a temporary lack of synchronisation is regarded as catastrophic. This emphasises the differences in the characteristics of data that should be maintained in an LDAP enabled directory versus a transactional database.

The configuration of Replication (OpenLDAP and ApacheDS) and Referral is described further in this chapter and featured in the samples.

7.1.1 LDAP Referrals

Figure 7.1-1 below shows a search request with a base DN of dn: cn=cheri,ou=uk,o=grommets,dc=example,dc=com sent to a referral-based LDAP system, which results in a series of referrals to the LDAP2 and LDAP3 servers:


Figure 7.1-1 - Request generates referrals to LDAP2 and LDAP3

Notes:

  1. All client requests start at the global directory LDAP 1
  2. At LDAP 1, requests for any data with widgets as an RDN in the DN are satisfied immediately from LDAP1, for example:
    dn: cn=thingie,o=widgets,dc=example,dc=com
    
  3. At LDAP 1 requests for any data with grommets as an RDN in the DN are referred to LDAP2, for example:
    dn: ou=us,o=grommets,dc=example,dc=com
    
  4. At LDAP 2, requests for any data with uk as an RDN in the DN are referred to LDAP3, for example:
    dn: cn=cheri,ou=uk,o=grommets,dc=example,dc=com
    
  5. If the LDAP server is configured to chain (follow the referrals as shown by the alternate dotted lines) then a single data response will be supplied to the LDAP client. Chaining is controlled by LDAP server configuration and by values in the search request; it is described further below.

  6. The figure illustrates explicit referrals using the referral ObjectClass; OpenLDAP servers may be configured to return a generic referral if the suffix of the search DN is not found in any DIT maintained by the server.
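A hypothetical referral entry of the kind held on LDAP1 in the figure might look like the following sketch (per RFC 3296 the referral ObjectClass carries one or more ref attributes holding LDAP URIs; the host name is illustrative, and extensibleObject permits the naming attribute o):

```ldif
# hypothetical referral entry held on LDAP1 - a search whose
# base DN lies under o=grommets,dc=example,dc=com returns a
# referral (the LDAP URI below) pointing at LDAP2
dn: o=grommets,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
o: grommets
ref: ldap://ldap2.example.com/o=grommets,dc=example,dc=com
```

Dereferencing uses the full LDAP URI (an inter-server jump): the client, or a chaining server, re-issues the search at the referred-to server.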

7.1.2 LDAP Replication

Replication features allow LDAP DIT updates to be copied to one or more LDAP systems for backup and/or performance reasons. In this context it is worth emphasizing that replication operates at the DIT level not the LDAP server level. Thus, in a single server running multiple DITs each DIT may be replicated to a different server. Replication occurs periodically within what this guide calls the replication cycle time. OpenLDAP version 2.3 introduced a powerful new replication feature (generically known as syncrepl) and with version 2.4 this was further enhanced to provide multi-master capabilities. There are two possible replication configurations and multiple variations on each configuration type.

Note: Pre version 2.4 OpenLDAP provided a replication feature which used a separate daemon known as slurpd. This feature now has only historical significance and its description is maintained solely for those still supporting legacy (pre 2.4) OpenLDAP systems. slurpd is not otherwise referenced.

  1. Master-Slave: In a master-slave configuration a single (master) DIT is capable of being updated and these updates are replicated or copied to one or more designated servers running slave DITs. The slave servers operate with read-only copies of the master DIT. Read-only users can access the servers containing the slave DITs but users who need to update the directory can only do so by accessing the server containing the master DIT. In order to confuse its poor users still further OpenLDAP has introduced the terms provider and consumer with the syncrepl replication feature. A provider may be viewed as the source of replication services (what mere mortals would call a master) and a consumer as a destination of replication services (what mere mortals would call a slave). Master-Slave (or provider-consumer) configurations have two obvious shortcomings:

    • Multiple locations. If all or most clients need to update the DIT then they will have to access one server (running the slave DIT) for normal read access and another server (running the master DIT) to perform updates. Alternatively, the clients can always access the server running the master DIT, in which case replication provides backup functionality only.

    • Resilience. Since there is only one server containing a master DIT it represents a single point of failure.

  2. Multi-Master: In a multi-master configuration one or more servers running master DITs may be updated and the resulting updates are propagated to the peer masters.

    Historically, OpenLDAP did not support multi-master operation but version 2.4 introduced a multi-master capability. In this context it may be worth pointing out two specific variations of the generic update-contention problem of all multi-master configurations, identified by OpenLDAP but true for all LDAP systems and multi-master implementations:

    1. Value-contention: If two attribute updates are performed at the same time (within the replication cycle time) with different values then, depending on the attribute type (SINGLE or MULTI-VALUED), the resulting entry may be in an incorrect or unusable state.

    2. Delete-contention: If one user adds a child entry at the same time (within the replication cycle time) as another user deletes the original (parent) entry then the deleted entry will re-appear.

Figure 7.1-2 shows a number of possible replication configurations.


Figure 7.1-2 - Replication Configurations

Notes:

  1. RO = Read-only, RW = Read-Write
  2. LDAP1: the client-facing system is a Slave and is read-only. Clients must issue Writes to the Master.

  3. LDAP2: the client-facing system is a Master and it is replicated to two slaves.

  4. LDAP3 is a Multi-Master and clients may issue reads (searches) and/or writes (modifies) to either system. Each master in this configuration could, in turn, have one or more slave DITs.


7.2 Replication

Replication occurs at the level of the DIT and describes the process of copying updates from a DIT on one LDAP server to the same DIT on one or more other servers. Replication configurations may be either MASTER-SLAVE - Provider-Consumer in OpenLDAP's idiosyncratic terminology (the SLAVE - consumer - copy is always read-only) - or MULTI-MASTER. Replication is a configuration (operational) issue; however, the Content Synchronization Protocol (used by syncrepl) is defined by RFC 4533.

7.2.1 OpenLDAP Replication

Since OpenLDAP version 2.4 there is only one method of replication - generically known as syncrepl after the Consumer (a.k.a. Slave) attribute (OLC = olcSyncrepl) or directive (slapd.conf = syncrepl) which invokes replication. Legacy versions of OpenLDAP (pre version 2.4) provided slurpd style replication which is now firmly obsolete.

7.2.1.1 OpenLDAP syncrepl Style Replication (from 2.3)

OpenLDAP version 2.3 introduced support for a new LDAP Content Synchronization protocol and from version 2.4 this has become the only replication capability supported. The LDAP Content Synchronization protocol is defined by RFC 4533 and generically known by the name of the Consumer's feature - olcSyncrepl/syncrepl - used to invoke replication. Syncrepl functionality provides both classic master-slave replication and since version 2.4 allows for multi-master replication. The protocol uses the terms provider (rather than master) to define the source of the replication updates and the term consumer (rather than slave) to define a destination for the updates.

In syncrepl style replication the consumer always initiates the update process. The consumer may be configured to periodically pull the updates from the provider (refreshOnly) or it may request the provider to push updates (refreshAndPersist). In all replication cases, in order to unambiguously refer to an entry the server must maintain a universally unique number (entryUUID) for each entry in a DIT. The process of synchronization is shown in Figure 7.2-2 (refreshOnly) and 7.2-3 (refreshAndPersist):

7.2.1.2 Replication refreshOnly (Consumer Pull)


Figure 7.2-2 - syncrepl Provider-Consumer Replication - refreshOnly

A slapd server (1) that wants to replicate a DIT (4) (the provider) to a slaved copy (5) (the consumer) is configured using olcSyncrepl (OLC) or syncrepl (slapd.conf) (7). olcSyncrepl/syncrepl defines the location (the LDAP URI) of the slapd server (3) (the provider) containing the master copy of the DIT (4). The provider (3) is configured using the syncprov overlay.

In the refreshOnly type of replication the consumer (1) initiates a connection (2) with the provider (3) - synchronization of the DITs takes place and the connection is broken (8). Periodically, at an interval determined by user configuration, the consumer (1) re-connects (2) with the provider (3) and re-synchronizes. refreshOnly synchronization may be viewed as operating in burst mode and the replication cycle time is the time between re-connections.

More Detail: The consumer (1) opens a session (2) with the provider (3) and requests refreshOnly synchronization. Any number of consumers can contact the provider and request synchronization service - the provider is not configured with any information about its consumer(s) - as long as a consumer satisfies the security requirements of the provider its synchronization requests will be executed. The synchronization request is essentially an extended LDAP search operation which defines the replication scope - using normal LDAP search parameters (base DN, scope, search filter and attributes) - thus the whole, or part, of the provider's DIT may be replicated depending on the search criteria.

The provider is not required to maintain per-consumer state information. Instead, at the end of a replication/synchronization session the provider sends a synchronization cookie (SyncCookie - D) - this cookie contains a Change Sequence Number (contextCSN) - essentially a timestamp indicating the last change sent to this consumer and which may be viewed as a change or synchronization checkpoint. When the consumer initiates a session it sends the last cookie (A) it received from the provider to indicate to the provider the limits of this synchronization session. Depending on how the consumer was initialised, the first time the consumer initiates communication it may not have a SyncCookie, and thus the initial synchronization covers all records in the provider's DIT (within the synchronization scope). A happy byproduct of this process is that it allows the consumer to start with an empty DIT.

The provider (3) for the DIT will respond using one or both of two phases. The first is the present phase (B) and indicates those entries which are still in the DIT (or DIT fragment) and consists of:

  1. For each entry that has changed (since the last synchronization) - the complete entry is sent including its DN and its UUID (entryUUID). The consumer can reliably update its entries from this data.

  2. For each entry that has NOT changed (since the last synchronization) an empty entry with its DN and UUID (entryUUID) is sent.

  3. No messages are sent for entries which have been deleted. Theoretically, at the end of the two previous processes the consumer may delete entries not referenced in either.

In the delete phase (C):

  1. The provider returns the DN and UUID (entryUUID) for each entry deleted since the last synchronization. The consumer can reliably delete these entries.

Whether both phases are required is determined by a number of additional techniques.

At the end of the synchronization phase(s) the provider sends a SyncCookie (the current contextCSN - D) and terminates the session (8). The consumer saves this SyncCookie and when it initiates another synchronization session (at a time defined by the interval parameter of its olcSyncrepl/syncrepl definition) it will send the last received SyncCookie to limit the scope of the subsequent synchronization session.
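As an illustration of the interval parameter mentioned above, a refreshOnly olcSyncrepl value might look like the following sketch (the provider URI and searchbase are the example values used later in this chapter; the interval format is dd:hh:mm:ss):

```ldif
# illustrative refreshOnly consumer fragment - the consumer
# re-connects and re-synchronizes every hour (interval=dd:hh:mm:ss)
olcSyncrepl: {0}rid=000
  provider=ldap://master-ldap.example.com
  type=refreshOnly
  interval=00:01:00:00
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
```

The replication cycle time in this sketch is therefore one hour plus the synchronization time itself.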

Configuration samples are included with refreshAndPersist below.

7.2.1.3 Replication refreshAndPersist (Provider Push)


Figure 7.2-3 - syncrepl Provider-Consumer Replication - refreshAndPersist

A slapd server (1) that wants to replicate a DIT (4) (the provider) to a slaved copy (5) (the consumer) is configured using olcSyncrepl (OLC) or syncrepl (slapd.conf) (7). olcSyncrepl/syncrepl defines the location (the LDAP URI) of the slapd server (3) (the provider) containing the master copy of the DIT (4). The provider (3) is configured using the syncprov overlay.

In the refreshAndPersist type of replication the consumer (1) initiates a connection (2) with the provider (3) - synchronization of the DITs takes place immediately and at the end of this process the connection is maintained (it persists). Subsequent changes (E) to the provider (3) are immediately propagated to the consumer (1).

More Detail: The consumer (1) opens a session (2) with the provider (3) and requests refreshAndPersist synchronization. Any number of consumers can contact the provider and request synchronization service - the provider is not configured with any information about its consumer(s) - as long as any consumer satisfies the security requirements of the provider its synchronization requests will be executed. The synchronization request is essentially an extended LDAP search operation which defines the replication scope - using normal LDAP search parameters (base DN, scope, search filter and attributes) - thus the whole, or part, of the provider's DIT may be replicated depending on the search criteria.

The provider (3) is not required to maintain per-consumer state information. Instead the provider periodically sends a synchronization cookie (SyncCookie - D) - this cookie contains a Change Sequence Number (contextCSN) - essentially a timestamp indicating the last change sent to this consumer and which may be viewed as a change or synchronization checkpoint. When a refreshAndPersist consumer (1) opens a session (2) with a provider (3) they must first synchronize the state of their DIT (or DIT fragment). Depending on how the consumer was initialised, when it initially opens a session (2) with the provider (3) it may not have a SyncCookie and therefore the scope of the changes is the entire DIT (or DIT fragment). A happy byproduct of this process is that it allows the consumer to start with an empty replica DIT. When a consumer (1) subsequently connects (2) to the provider (3) it will have a SyncCookie (D). In the case of a refreshAndPersist type of replication, re-connection will occur after a failure of the provider, consumer or network, each of which will terminate a connection which is otherwise maintained permanently.

During the initial synchronization process the provider (3) for the DIT will respond with one or both of two phases. The first is the present phase (B) and indicates those entries which are still in the DIT (or DIT fragment) and consists of:

  1. For each entry that has changed (since the last synchronization) - the complete entry is sent including its DN and its UUID (entryUUID). The consumer can reliably update its entries from this data.

  2. For each entry that has NOT changed (since the last synchronization) an empty entry with its DN and UUID (entryUUID) is sent.

  3. No messages are sent for entries which have been deleted. Theoretically, at the end of the two previous processes the consumer may delete entries not referenced in either.

In the delete phase (C):

  1. The provider returns the DN and UUID (entryUUID) for each entry deleted since the last synchronization. The consumer can reliably delete these entries.

Whether both phases are required is determined by a number of additional techniques.

At the end of the synchronization phase(s) (D) the provider typically sends a SyncCookie (the current contextCSN) and MAINTAINS the session. Subsequent updates (E) - which may be changes, additions or deletions - to the provider's DIT (4) will be immediately sent (E) by the provider (3) to the consumer (1) where the replica DIT (5) can be updated. Changes or additions result in the complete entry (including all attributes) being transferred and the SyncCookie (D) is periodically updated as well. The provider DIT (4) and consumer DIT (5) are maintained in synchronization with the replication cycle time approaching the transmission time between provider and consumer.

syncrepl Configuration Examples:

The difference between configuring refreshOnly and refreshAndPersist is so trivial that the two have been combined, with the single difference clearly identified.

syncrepl using OLC (cn=config):

Master (Provider) Configuration: The syncprov overlay needs to be added as a child entry of the olcDatabase definition for the DIT we require to replicate. If we assume, for the sake of simplicity, that the DIT we wish to replicate is defined by olcDatabase={1}bdb,cn=config then the following LDIF will add the syncprov overlay definition as a child entry:

dn: olcOverlay=syncprov, olcDatabase={1}bdb,cn=config
objectclass: olcSyncProvConfig
olcOverlay: syncprov
olcSpCheckpoint: 100 10

Notes:

  1. Only the olcOverlay attribute is mandatory (MUST) to create this entry. All other attributes can be added subsequently if that makes more sense. Full list of olcSyncProvConfig attributes. This overlay specific objectClass has a SUPerior of olcOverlayConfig - full list of olcOverlayConfig attributes.

  2. When the child entry has been created it will have (assuming this is the first overlay) a dn of olcOverlay={0}syncprov,olcDatabase={1}bdb,cn=config. The {0} was allocated when the entry was added. If you really, really want to know what those {} (braces) mean.

  3. Use of LDIF is one of many ways to add to the OLC DIT. Alternatives include any decent general-purpose LDAP browser and some tailored OLC (cn=config) utilities which are starting to emerge.

  4. If the syncprov overlay is built to be dynamically loaded (Linux distribution packaging policies differ wildly on this issue) then the appropriate entry in cn=module{0},cn=config must be added before adding the syncprov overlay child entry. Add/Delete modules using OLC.

  5. The general organization of OLC (cn=config) is described separately. Addition of overlays in general is further described.
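For note 4 above, a minimal sketch of the module-load LDIF (assuming cn=module{0},cn=config already exists and the module file is named syncprov.la, which varies by distribution):

```ldif
# load the syncprov overlay module before adding
# the olcOverlay=syncprov child entry
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov.la
```

If the overlay was compiled statically into slapd this step is unnecessary.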

Slave (Consumer) Configuration: The olcSyncrepl attribute needs to be added to the olcDatabase definition for the DIT containing the replica. Clearly, the olcDatabase entry into which we will replicate must already exist. The first LDIF creates the olcDatabase entry if this does not exist. The second LDIF simply adds the olcSyncrepl attribute to an existing entry.

Create the olcDatabase entry if it does not exist:

# order value {} not present so entry will be appended 
# to current children
# the olcDatabase type should be changed to reflect 
# the actual type being used and the
# objectclass: olcBdbConfig changed appropriately

dn: olcDatabase=bdb,cn=config
objectClass: olcBdbConfig
olcDatabase: bdb
olcDbDirectory: /var/db/new-db
olcSuffix: dc=example,dc=com

Notes:

  1. Full list of olcBdbConfig attributes. This database specific (bdb) objectClass has a SUPerior of olcDatabaseConfig - full list of olcDatabaseConfig attributes.

  2. Both olcDatabase and olcDbDirectory are mandatory (MUST) attributes and in the case of olcDbDirectory the directory must exist, with appropriate permissions, before running this LDIF.

  3. The olcSuffix attribute is not mandatory but the entry will croak without it - go figure.

  4. Additional attributes may be added when the entry is created or at any time subsequently including such obvious ones as olcRootDn and olcRootPw.

  5. If the bdb database is built to be dynamically loaded (Linux distribution packaging policies differ wildly on this issue) then the appropriate entry in cn=module{0},cn=config must be added before adding this database entry. Add/Delete modules using OLC.

  6. The general organization of OLC (cn=config) is described separately. Addition of databases in general is further described.

We assume that the replication DIT is defined by the entry olcDatabase={1}bdb. Adding the olcSyncrepl attribute is shown in the following LDIF:

# NOTE: the items beginning # below are explanatory 
# comments and must be removed before 
# running this LDIF

dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: {0}rid=000 
  provider=ldap://master-ldap.example.com
# replace with type=refreshonly if this style is preferred
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
# both user (*) and operational (+) attributes required
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
# warning: password sent in clear - insecure
  credentials=dirtysecret

Notes:

  1. The value {0} on the line olcSyncrepl: {0}rid=000 is not strictly necessary if this is the first olcSyncrepl attribute and could be omitted (it will be allocated the value {0} when added). However, if a subsequent olcSyncrepl attribute is added without specifying its order value then it will be rejected rather than being allocated a sequential number. If you really, really want to know what those {} (braces) mean.

  2. The olcUpdateref attribute (added to the global cn=config entry) is optional and may be used to refer any client which attempts a write (modify) operation to a slave (consumer - always read-only) to an appropriate master (provider).
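A sketch of note 2 above - adding the olcUpdateref attribute so that the consumer refers write attempts to the master (the URI is illustrative and should point at your provider):

```ldif
# refer clients attempting writes on this read-only
# consumer to the master (provider)
dn: cn=config
changetype: modify
add: olcUpdateref
olcUpdateref: ldap://master-ldap.example.com
```

Clients receive a referral in response to any write (modify) operation and can then re-issue the operation at the master.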

syncrepl using slapd.conf:

This example configures replication using the slapd.conf file. Master slapd.conf configuration (assumed host name master-ldap.example.com):

# slapd provider (master)
# global section
...

# database section
database bdb
...
# allows read access from consumer
# may need merging with other ACLs
# referenced dn.base must be same as binddn= of consumer
# 
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 
		 
# NOTE: 
# the provider configuration contains no reference to any consumers

# define as provider using the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

consumer slapd.conf configuration:

# slapd consumer (slave)
# global section


# database section
database bdb
...

# provider is ldap://master-ldap.example.com:389,
# whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
# NOTE: comments inside the syncrepl directive are rejected by OpenLDAP
#       and are included only for further explanation. They MUST NOT
#       appear in an operational file
syncrepl rid=000 
  provider=ldap://master-ldap.example.com
# replace with type=refreshonly if this style is preferred
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
# both user (*) and operational (+) attributes required
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
# warning: password sent in clear - insecure
  credentials=dirtysecret

Note: The updateref directive is optional and may be used to refer a client which attempts a write (modify) to a slave (consumer - always read-only) to an appropriate master (provider).
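The updateref directive mentioned in the note is a single line in the consumer's database section; a sketch (the URI is illustrative):

```
# consumer database section - refer write (modify)
# requests to the master (provider)
updateref ldap://master-ldap.example.com
```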


7.2.1.4 OpenLDAP syncrepl (N-Way) Multi-Master

OpenLDAP 2.4 introduced N-way Multi-Master support. In N-Way Multi-Master configurations any number of masters may be synchronized with one another. The functionality of replication has been previously described for refreshOnly and refreshAndPersist and is not repeated here. The following notes and configuration examples are unique to N-Way Multi-Mastering.

Note: When running N-Way Multi-Mastering it is vital that the clocks on all the master (providers) are synchronized to the same time source, for example, they should all run NTP (Network Time Protocol).

In N-Way Multi-Mastering each provider of synchronization services is also a consumer of synchronization services as shown in Figure 7.2-4:


Figure 7.2-4: syncrepl N-Way Multi-Mastering

Figure 7.2-4 shows a 3-Way Multi-Master (1, 2, 3) Configuration. Each Master is configured (5, 6, 7) as a provider (using the syncprov overlay) and as a consumer for all of the other masters (using olcSyncrepl/syncrepl). Each provider must be uniquely identified using olcServerID/ServerID. Each provider should be, as noted above, synchronized to a common clock source. Thus, the provider (1) of DIT (4) contains a configured syncprov overlay (the provider overlay) and two refreshAndPersist type olcSyncrepl/syncrepl definitions, one for each of the other providers (2, 3) as shown by the blue communication links. Similarly, each of the other providers has an equivalent configuration - a single provider capability and refreshAndPersist olcSyncrepl/syncrepl definitions for the other two masters (providers).

In this configuration, assuming that a refreshAndPersist type of synchronization is used (it is not clear why you would even want to think about using refreshOnly but it is possible), then a write (modify) to any master will be immediately propagated to all the other masters (providers) acting in their slave (consumer) role.

Version 2.4 of N-Way Multi-Master replication does not currently support delta synchronization.

syncrepl N-Way Multi-Master Configuration Examples:

syncrepl N-Way Multi-Master using OLC (cn=config):

Recall that each DIT will act as both a Master (provider) and a Slave (consumer) for all the other servers in the 3-way (in this example case) multi-master configuration. Assume our 3 servers are ldap1.example.com, ldap2.example.com and ldap3.example.com (oh, the imagination).

Master (Provider) Configuration: The syncprov overlay needs to be added as a child entry of the olcDatabase definition for the DIT we wish to configure as an N-Way master (provider). If we assume, for the sake of simplicity, that all the DITs are defined as olcDatabase={1}bdb,cn=config then the following LDIF (executed on each server) will add the syncprov overlay as a child entry to each of the masters:

dn: olcOverlay=syncprov, olcDatabase={1}bdb,cn=config
objectclass: olcSyncProvConfig
olcOverlay: syncprov
olcSpCheckpoint: 100 10

Notes:

  1. It is assumed that the olcDatabase entry already exists - if this is not the case then use this procedure.

  2. Only the olcOverlay attribute is mandatory (MUST) to create this child entry. All other attributes can be added subsequently if that makes more sense. Full list of olcSyncProvConfig attributes. This overlay specific objectClass has a SUPerior of olcOverlayConfig - full list of olcOverlayConfig attributes.

  3. When the child entry has been created it will have (assuming this is the first overlay) a dn of olcOverlay={0}syncprov,olcDatabase={1}bdb,cn=config. The {0} was allocated automatically when the entry was added. If you really, really want to know what those {} (braces) mean.

  4. Use of LDIF is one of many ways to add to the OLC DIT. Alternatives include any decent general-purpose LDAP browser and some tailored OLC (cn=config) utilities which are starting to emerge.

  5. If the syncprov overlay is built to be dynamically loaded (Linux distribution packaging policies differ wildly on this issue) then the appropriate entry in cn=module{0},cn=config must be added before adding the syncprov overlay child entry.

  6. The general organization of OLC (cn=config) is described separately. Addition of overlays in general is further described.

Slave (consumer) Configuration: again recall that each of our servers acts as a Master (provider) of its own DIT and as a Slave (consumer) of the other two masters. The following LDIF file modifies ldap1.example.com:

dn: cn=config
changetype: modify
add: olcServerId
olcServerId: 1

dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: {0}rid=000
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
olcSyncrepl: {1}rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
-
add: olcAccess
olcAccess: {x}to *
  by dn.base="cn=admin,ou=people,dc=example,dc=com" read
  by * break
-
add: olcDbIndex
olcDbIndex: entryUUID eq
olcDbIndex: entryCSN eq
-
replace: olcMirrorMode
olcMirrorMode: TRUE
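Once equivalent LDIF has been applied to all three servers, convergence can be checked by reading the contextCSN operational attribute at the base of each replica. A sketch, assuming anonymous read access to the base entry is permitted (otherwise add -D/-w bind options):

```shell
# the contextCSN values should converge across all three masters
ldapsearch -x -H ldap://ldap1.example.com -b "dc=example,dc=com" -s base contextCSN
ldapsearch -x -H ldap://ldap2.example.com -b "dc=example,dc=com" -s base contextCSN
ldapsearch -x -H ldap://ldap3.example.com -b "dc=example,dc=com" -s base contextCSN
```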

Notes:

  1. olcServerID is always defined in the cn=config entry (it is an attribute of the olcGlobal objectClass).

  2. The olcDbIndex attributes are optional but may help speed up processing at the expense of index overheads.

  3. The olcAccess attribute is required by each of the providers to authorize reading the appropriate sections of the DIT (in this case the whole DIT). It is shown with the order-value {x} to indicate that you must select an appropriate value to merge it into the correct sequence of any pre-existing olcAccess attributes. If this is the first or only olcAccess attribute you can either use the value {0} or omit it entirely, in which case it will be allocated the value {0} when added. If adding this attribute would necessitate a lot of shuffling around of existing olcAccess attributes, then a crude solution (if it does not break any security policies) would be to give it global scope by adding it to the cn=config entry, in which case it would be moved up under the dn: cn=config part of the LDIF.

  4. Since the masters all replicate the same DIT the binddn is shown as having the same value throughout. This is perfectly permissible. However, if any single server is subverted - all are subverted. A safer strategy may be to use a unique binddn parameter for each server. This will require changes in the olcSyncrepl and olcAccess attributes.

  5. Each rid parameter of the olcSyncrepl directive must be unique within the cn=config DIT (within the server). The olcServerId value must be unique to each server (across all servers). There is no relationship between rid and olcServerId values.

  6. The attribute olcMirrorMode: TRUE is required (confusingly) for multi-master configurations. Omitting this attribute in any master (provider) configuration will cause all updates to fail.

  7. The same LDIF can be used for all the servers in the 3-way set. The olcServerId attribute must be changed to reflect the unique server number (say, 2 and 3) and the olcSyncrepl provider= lines changed to reflect the relevant LDAP servers acting as Masters (providers) for each Slave (consumer) role.

  8. The state of each DIT is not important when any server is configured to take part in an N-way multi-master configuration; indeed, one or more of the servers in this 3-way example could have empty DITs. The initial connection from any consumer to its provider will perform a full synchronization. See Syncrepl refreshAndPersist above.

  9. Multi-mastering requires clock synchronization between all the servers. Each server should be an NTP client and all servers should point to the same clock source. It is not enough to use a command such as ntpdate on server load or a similar technique since individual server clock drift can be surprisingly large.

  10. Update contention is one of many problems encountered in multi-master replication. OpenLDAP resolves such contention using timestamps. Thus, if updates are performed to the same attribute(s) at roughly the same time (within the propagation time difference) on separate servers then one of the updates will be lost for the attribute(s) in question. The one that is lost will have the lower timestamp value - the difference need only be milliseconds. This is an unavoidable by-product of multi-mastering. NTP will minimize the occurrence of this problem.

syncrepl N-Way Multi-Master using slapd.conf:

Assume three masters (ldap1.example.com, ldap2.example.com and ldap3.example.com) using syncrepl N-Way multi-master - then the three masters would have slapd.conf files as shown:

slapd.conf for ldap1.example.com:

# slapd master ldap1.example.com
# global section
...
# uniquely identifies this server
serverID 001
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACL's
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 
		 
# NOTE: 
# syncrepl directives for each of the other masters
# provider is ldap://ldap2.example.com:389,
# whole DIT (searchbase), all user and operational attributes synchronized
# simple security with cleartext password
syncrepl rid=000 
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap3.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with cleartext password
syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
...
# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq 
...
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

slapd.conf for ldap2.example.com:

# slapd master ldap2.example.com
# global section
...
# uniquely identifies this server
serverID 002
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACL's
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 
		 
# NOTE: 
# syncrepl directives for each of the other masters
# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap3.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
...
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq 
...
# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

slapd.conf for ldap3.example.com:

# slapd master ldap3.example.com
# global section
...
# uniquely identifies this server
serverID 003
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACL's
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 
		 
# NOTE: 
# syncrepl directives for each of the other masters
# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap2.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=001 
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq 
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10
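Each of the three slapd.conf files above can be sanity-checked before (re)starting slapd; a sketch, assuming the file lives at /etc/openldap/slapd.conf (the path varies by distribution):

```shell
# verify the configuration file parses cleanly
slaptest -f /etc/openldap/slapd.conf
```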

Notes:

  1. Since the masters all replicate the same DIT the binddn is shown as having the same value throughout. This is perfectly permissible. However if any single server is subverted - all are subverted. A safer strategy may be to use a unique binddn entry for each server. This will require changes in the syncrepl and access to directives.

  2. Each rid parameter of the syncrepl directive must be unique within the slapd.conf file (the server). The serverID value must be unique to each server (across all servers). There is no relationship between rid and serverID values.

  3. The mirrormode true directive is required (confusingly) for multi-master configurations. It must appear after all the syncrepl directives in the database section. Omitting this directive in any master configuration will cause all updates to fail.

  4. Multi-mastering requires clock synchronization between all the servers. Each server should be an NTP client and all servers should point to the same clock source. It is not enough to use a command such as ntpdate on server load or a similar technique since individual server clock drift can be surprisingly large.

  5. Update contention is one of many problems encountered in multi-master replication. OpenLDAP resolves such contention using timestamps. Thus, if updates are performed to the same attribute(s) at roughly the same time (within the propagation time difference) on separate servers then one of the updates will be lost for the attribute(s) in question. The one that is lost will have the lower timestamp value - the difference need only be milliseconds. This is an unavoidable by-product of multi-mastering. NTP will minimize the occurrence of this problem.


7.2.1.5 Session Logs, Access Logs and Delta-sync

It has all been simple up until this point. Now it gets a bit messy. All in the interests of reducing traffic flows between provider and consumer.

refreshOnly synchronization can have a considerable time lag before update propagation - depending on the interval parameter of the olcSyncrepl/syncrepl definition. As the time interval between updates is reduced (to minimise propagation delays) it effectively approaches refreshAndPersist but it incurs an initial synchronization on every re-connection. If the present phase is needed during synchronization then, even if no changes have been made since the last synchronization, every unchanged entry results in an empty entry (containing only its identifying entryUUID) being sent to the consumer, which can take a considerable period of time.

In refreshAndPersist mode re-synchronization only occurs on initial connection (re-connection only occurs after a major error to the provider, consumer or network). However, even in this mode updates to any attribute in an entry will cause the entire entry to be transferred. For very big entries this can cause an unacceptable overhead when only a trivial change has been made.

Finally, pathological LDAP implementations can create update problems. As a worst case assume an application is run daily that has the effect of changing a 1 octet value in every DIT entry (say, the day number). The net effect would be to transfer the entire DIT to all consumers - perhaps a tad inconvenient or perhaps even impossible within a 24 hour period.

OpenLDAP provides two methods to ameliorate these problems: the session log and the access log. The objective of both methods is to minimise data transfer and in the case of Access Logs to provide what is called delta-synchronization (only transferring entry changes not whole entries).

Session Logs

The syncprov overlay has the ability to maintain a session log. The session log parameter takes the form:

olcSpSessionlog: ops # OLC (cn=config)
syncprov-sessionlog ops # slapd.conf
# where ops defines the number of
# operations that can be stored
# and is wrapped when full
# NOTE: version 2.3 showed a sid parameter
#       which was removed in 2.4

# LDIF to add a syncprov child entry
# modify olcDatabase={1}bdb to suit configuration

dn: olcOverlay=syncprov, olcDatabase={1}bdb,cn=config
objectclass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionlog: 100
olcSpCheckpoint: 100 10

# example of slapd.conf syncprov definition
# including a session log of 100 entries (changes)
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

The session log is memory based and contains all operations performed (except add operations). Depending on the time period covered by the session log it may allow the provider to skip the optional present phase - thus significantly speeding up the synchronization process. If, for example, no changes have occurred in the session log since the last synchronization there is no need for a present phase. The session log may be used with any type (refreshOnly or refreshAndPersist) but is clearly most useful with refreshOnly. If the session log is not big enough to cover the period since the last consumer synchronization request then a full re-synchronization sequence (including the present phase) is invoked. No special olcSyncrepl/syncrepl parameters are required in the consumer when using the session log.
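If the syncprov overlay entry has already been created (as in the earlier multi-master examples), the session log can be added with a simple modify rather than recreating the overlay; a sketch, adjusting the overlay and database RDNs to suit your configuration:

```ldif
# add a 100-operation session log to an existing syncprov overlay
dn: olcOverlay={0}syncprov,olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSpSessionlog
olcSpSessionlog: 100
```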

Access Log (Delta Synchronization)

accesslog provides a log of LDAP operations on a target DIT and makes them available in a related, but separate, accesslog DIT. The accesslog functionality is provided by the accesslog overlay.

In the normal case, replica synchronization performs the update using information in the DIT which is the subject of the search request contained within the Synchronization request (the initial consumer connection). Alternatively, the synchronization may be performed by targeting olcSyncrepl/syncrepl at the accesslog instead. Since the objects stored in the access log only contain the changes (including delete, add, rename and modrdn operations) the volume of data is significantly lower than when performing a full synchronization operation against the main DIT (or even a DIT fragment) where, if any attribute is changed, the entire entry is transferred. Use of the accesslog is known as delta Replication or delta synchronization or even delta-syncrepl.

Use of this fairly complex configuration needs to be judged in the light of the operational details. In general, it is probably worth the investment if one or more of the following is true:

  1. Entry sizes are greater than, say, 20K as a fairly arbitrary size;

  2. Entry change volume is greater than, say, 20% of the DIT per day or at any peak period AND the percentage of each entry changed is less than, say, 50%;

  3. Network bandwidth is either limited or congested.

Use of this form of replication requires definition of an accesslog DIT in the provider and the use of the logbase, logfilter and syncdata parameters of olcSyncrepl/syncrepl in the consumer as shown in the examples below:

Delta Replication (Accesslog) Examples:

Delta Replication (Accesslog) using OLC

This example assumes a Master (provider) of ldap1.example.com and a Slave (consumer) of ldap2.example.com. The same approach would in principle apply to an N-way multi-master configuration at the expense of some added complexity (though as of 2.4.35 delta replication is not supported for N-way multi-mastering).

Master (Provider) Configuration:

The following LDIF enhances an existing configuration which is assumed to have an olcDatabase={1}bdb,cn=config entry (change the {1} value or database type as appropriate). If the database/DIT does not already exist use this procedure to add it. A number of comments are included in the LDIF for explanatory purposes; they can be left in or omitted as desired:

# slapd provider ldap1.example.com
# global entry
# allow read access to target and accesslog DITs
# to consumer. This form applies a global access rule
# the DN used MUST be the same as that used in the binddn 
# parameter of the olcSyncrepl attribute of the consumer

dn: cn=config
changetype: modify
add: olcAccess
olcAccess: {x} to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 

# syncprov specific indexing (add others as required)
# not essential but improves performance

dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryCSN eq
olcDbIndex: entryUUID eq


# define access log overlay entry and attributes
# prunes the accesslog every day:
# deletes entries more than 2 days old
# log writes (covers add, delete, modify, modrdn)
# log only successful operations
# log has base suffix of cn=deltalog

dn: olcOverlay={0}accesslog,olcDatabase={1}bdb,cn=config
objectclass: olcAccessLogConfig
olcOverlay: {0}accesslog
olcAccessLogDb: cn=deltalog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
olcAccessLogPurge: 2+00:00 1+00:00


# add the syncprov overlay as a child entry
# (last child entry)
# contextCSN saved to database every 100 updates or ten minutes

dn: olcOverlay={1}syncprov, olcDatabase={1}bdb,cn=config
objectclass: olcSyncProvConfig
olcOverlay: {1}syncprov
olcSpCheckpoint: 100 10

# now create the accesslog DIT entry
# normal database definition

dn: olcDatabase={2}bdb,cn=config
objectClass: olcBdbConfig
olcDatabase: bdb
olcSuffix: cn=deltalog
olcDbDirectory: /var/db/delta
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart eq

# the access log is also a provider
# olcSpNoPresent inhibits the present phase of synchronization
# olcSpReloadHint TRUE mandatory for delta sync

dn: olcOverlay={0}syncprov, olcDatabase={2}bdb,cn=config
objectclass: olcSyncProvConfig
olcOverlay: {0}syncprov
olcSpCheckpoint: 100 10
olcSpNoPresent: TRUE
olcSpReloadHint: TRUE

Notes:

  1. Full list of olcAccessLogConfig attributes.

  2. The olcAccess attribute is shown being added to the global cn=config entry. If this is the first olcAccess attribute the {x} could be omitted or set to {0}. The olcAccess attribute could equally have been added (in the correct order) to the olcDatabase={1}bdb,cn=config entry.

  3. The database definition for the accesslog (cn=deltalog) in the provider does not contain olcRootDn or olcRootPw attributes since these are not required. There is, further, no olcAccess attribute for this database, which means the global one defined in cn=config is used. This user is defined to have minimal access to both the target and accesslog DITs at the expense of having to define a real entry in the target DIT. Using the olcRootDn and olcRootPw of the target DIT as a binddn within the consumer olcSyncrepl attribute would also work (assuming the global olcAccess attribute was removed and an appropriate olcAccess attribute added to the accesslog DIT definition) but this exposes a high-privilege user to potential sniffing attacks and is not advised.

Slave (Consumer) Configuration:

The example assumes that the database entry currently exists with a dn of olcDatabase={1}bdb,cn=config - change as appropriate. If this is not the case use this procedure. The following LDIF simply adds the olcSyncrepl attribute to an existing configuration.


dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncRepl: {0}rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
  logbase="cn=deltalog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  syncdata=accesslog 

Notes:

  1. The logfilter search ("(&(objectClass=auditWriteObject)(reqResult=0))") reads all standard entries in the accesslog that were successful (reqResult=0) - since we defined the only entries to be logged as successful ones in the provider (olcAccessLogSuccess: TRUE) this is theoretically redundant but can do no harm.

  2. The consumer may be started with an empty DIT in which case normal synchronization occurs initially and when complete subsequent updates occur via the accesslog mechanism.

  3. When the accesslog transmits a change, the new attribute value, entryCSN, modifiersName (DN) and modifyTimestamp are supplied to the consumer. The latter three attributes are all operational and are required to provide a perfect replica.
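When debugging delta replication it can be useful to inspect the accesslog DIT directly on the provider, using the same filter as the consumer's logfilter parameter; a sketch using the binddn and credentials from the examples above:

```shell
# list successful write operations recorded in the accesslog DIT
ldapsearch -x -H ldap://ldap1.example.com \
  -D "cn=admin,ou=people,dc=example,dc=com" -w dirtysecret \
  -b "cn=deltalog" "(&(objectClass=auditWriteObject)(reqResult=0))"
```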

Delta Replication (Accesslog) using slapd.conf

Provider configuration (assumed hostname of ldap1.example.com):

# slapd provider ldap1.example.com
# global section
...
# allow read access to target and accesslog DITs
# to consumer. This form applies a global access rule
# the DN used MUST be the same as that used in the binddn 
# parameter of the syncrepl directive of the consumer
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break 
...
# database section - target DIT
# with suffix dc=example,dc=com
database bdb
suffix "dc=example,dc=com"
...
# syncprov specific indexing (add others as required)
# not essential but improves performance
index entryCSN,entryUUID eq
...
# define access log overlay and parameters
# prunes the accesslog every day:
# deletes entries more than 2 days old
# log writes (covers add, delete, modify, modrdn)
# log only successful operations
# log has base suffix of cn=deltalog
overlay accesslog
logdb "cn=deltalog"
logops writes
logsuccess TRUE 
logpurge 2+00:00 1+00:00

# define the replica provider for this database
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

# now define the accesslog DIT 
# normal database definition
database bdb
...
suffix "cn=deltalog"
# these are recommended to optimize accesslog
index default eq
index entryCSN,objectClass,reqEnd,reqResult,reqStart 
...
# the access log is also a provider
# syncprov-nopresent inhibits the present phase of synchronization
# syncprov-reloadhint TRUE mandatory for delta sync
overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE 

Consumer configuration:

# slapd consumer ldap2.example.com
# global section
...

# database section
database bdb
suffix "dc=example,dc=com"
...

# NOTE: 
# syncrepl directive will use the accesslog for 
# delta synchronization
# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
# binddn is used to authorize access in provider
# logbase references logdb (deltalog) in provider
# logfilter allows successful add, delete, modify, modrdn ops
# syncdata defines this to use accesslog format
syncrepl rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
  logbase="cn=deltalog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  syncdata=accesslog 
...

Notes:

  1. The logfilter search ("(&(objectClass=auditWriteObject)(reqResult=0))") reads all standard entries in the accesslog that were successful (reqResult=0) - since we defined the only entries to be logged as successful ones in the provider (logsuccess TRUE) this is theoretically redundant but can do no harm.

  2. The database definition for the accesslog (cn=deltalog) in the provider does not contain rootdn or rootpw directives since these are not required. There is, further, no access to directive for this database, which means the global one defined at the start of the provider slapd.conf file is used. This user is defined to have minimal access to both the target and accesslog DITs at the expense of having to define a real entry in the target DIT. Using the rootdn and rootpw of the target DIT as a binddn within the consumer syncrepl directive would also work (assuming the global access to directive was removed and an appropriate access to directive added to the accesslog DIT definition) but this exposes a high-privilege user to potential sniffing attacks and is not advised.

  3. The consumer may be started with an empty DIT in which case normal synchronization occurs initially and when complete subsequent updates occur via the accesslog mechanism.

  4. When the accesslog is propagated, the new attribute value, entryCSN, modifiersName (DN) and modifyTimestamp are supplied to the consumer. The latter three attributes are all operational and are required to provide a perfect replica.


7.2.1.6 Syncing DIT before syncrepl Replication

When initiating syncrepl replication there are two possible strategies:

  1. Do nothing. After configuring the consumer there is no need to do anything further. The initial synchronization request will synchronize the replica from an empty state. However, where the DIT is very large this may take an unacceptably long time.

  2. Load an LDIF copy of the replica from the provider using slapadd before starting the replication. Depending on how this is done the initial synchronization may be minimal or non-existent. The following shows such a process when using a provider running OpenLDAP 2.3+ and assumes that the provider has been configured for replication:

    1. Save an LDIF copy of the provider's DIT (using a Browser or even slapcat if using a BDB or HDB backend). There is no need to stop the provider since any inconsistencies during the saving process or between the state when the DIT was saved and loaded into the consumer will be resolved during initial synchronization.

    2. Move the LDIF file to the consumer location.

    3. Configure the consumer.

    4. Load the LDIF into the consumer using slapadd with the -w option to create the most up-to-date SyncCookie. Example:

      slapadd -l /path/to/provider/copy/ldif -w
      
    5. Start the consumer.

    6. Run a test transaction on either the provider (master/slave) or one of the providers (multi-master) and confirm that it has propagated.

  3. In the case of a master/slave configuration you may want to add an olcReferral/referral directive and/or an olcUpdateref/updateref directive.
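The loading strategy in item 2 could be sketched as a short command sequence (host and file names hypothetical; assumes a BDB/HDB backend on the provider):

```shell
# 1. on the provider: dump the DIT (safe on a running BDB/HDB server)
slapcat -b "dc=example,dc=com" -l provider.ldif
# 2. copy the dump to the consumer
scp provider.ldif consumer:/tmp/provider.ldif
# 3. on the consumer, after configuration but before starting slapd:
#    the -w option writes an up-to-date syncCookie into the database
slapadd -b "dc=example,dc=com" -l /tmp/provider.ldif -w
# 4. start slapd on the consumer and run a test update on the provider
```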


7.2.2 ApacheDS Replication

One day real soon now ™

Under Construction


7.3 Referrals


Copyright © 1994 - 2014 ZyTrax, Inc.
All rights reserved. Legal and Privacy
site by zytrax
Hosted by super.net.sg
web-master at zytrax
Page modified: February 06 2014.

Creative Commons License
This work is licensed under a Creative Commons License.
