Chapter 7 Replication & Referral

This chapter provides information about configuring LDAP systems for Replication and Referral. Replication is an operational characteristic and is implemented through configuration options, whereas Referrals may be generic (an operational characteristic) or explicit (using the referral ObjectClass) within a DIT. Whether an LDAP server follows referrals (known as chaining) or returns the referral to the client is configured within the LDAP server. Additionally, LDAP browsers can usually be configured either to follow referrals automatically or to display the referral entries so that they may be edited.

Contents

7.1 Replication and Referral Overview
7.2 Replication
7.2.1 OpenLDAP Replication
7.2.1.1 OpenLDAP slurpd Style Replication
7.2.1.1.1 OpenLDAP slurpd Replication Errors
7.2.1.2 OpenLDAP syncrepl Style Replication
7.2.1.2.1 OpenLDAP syncrepl RefreshOnly
7.2.1.2.2 OpenLDAP syncrepl RefreshAndPersist
7.2.1.2.3 OpenLDAP syncrepl Multi-Master
7.2.1.2.4 OpenLDAP syncrepl SessionLog, Access Logs and Delta-sync
7.2.2 ApacheDS Replication
7.2.3 Synching DIT before slurpd Replication
7.2.3.1 Synching DIT before syncrepl Replication
7.4 Referrals
7.4.1 Referral Chaining

7.1 Replication and Referral Overview

One of the more powerful aspects of LDAP (and X.500) is the inherent ability within the standard to delegate the responsibility for maintenance of a part of the directory while continuing to see the directory as a consistent whole. Thus, a company directory DIT may create a delegation (a referral is the LDAP term) of the responsibility for a particular department's part of the overall directory (DIT) to that department's LDAP server. The delegated DIT is said to be subordinate to the DIT from which it was referred. And the DIT from which it was referred is said to be superior. In this respect LDAP delegation almost exactly mirrors DNS delegation.

Unlike the DNS system, there is no option in the standards to tell the LDAP server to follow (resolve) a referral (there is a referenced RFC draft in various documents) - it is left to the LDAP client to directly contact the new server using the returned referral. Equally, because the standard does not define LDAP data organisation it does not contravene the standard for an LDAP server to follow (resolve) the referrals and some LDAP servers perform this function automatically using a process that is usually called chaining.
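By way of illustration, OpenLDAP's command-line tools can be asked to chase a referral themselves using the -C option. This is a minimal sketch only; the host name is illustrative and the base DN is the one used in Figure 7.1-1:

# without -C any referral is simply returned to the client
ldapsearch -x -H ldap://ldap1.example.com \
  -b "uid=cheri,ou=uk,o=grommets,dc=example,dc=com" "(objectclass=*)"

# with -C ldapsearch follows (chases) the referral itself
# (the referred connection is typically made anonymously)
ldapsearch -x -C -H ldap://ldap1.example.com \
  -b "uid=cheri,ou=uk,o=grommets,dc=example,dc=com" "(objectclass=*)"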

OpenLDAP takes a literal view of the standard and does not chain by default; it always returns a referral. However, OpenLDAP can be configured to provide chaining by using the chain overlay (the overlay chain directive).
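A minimal chaining sketch, assuming the slapo-chain overlay, is shown below; the target URI, proxy binddn and credentials are illustrative assumptions and do not appear elsewhere in this chapter:

# global/frontend section of slapd.conf
# the chain overlay is built on back_ldap - load it if built as a module
# moduleload back_ldap.la

overlay chain
# URI of the server the referral points at
chain-uri "ldap://ldap2.example.com"
# identity used when chasing the referral on behalf of the client
chain-idassert-bind bindmethod=simple
  binddn="cn=proxy,dc=example,dc=com"
  credentials=proxysecret
  mode=self
# return the real error to the client if chaining fails
chain-return-error TRUE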

The built-in replication features of LDAP allow one or more copies of a directory (DIT) to be slaved (or replicated) from a single master thus inherently creating a resilient structure. Version 2.4 of OpenLDAP introduced the ability to provide full N-Way Multi-Master configurations.

It is important, however, to emphasize the difference between LDAP and a transactional database. When an update is performed on a master LDAP enabled directory, it may take some time (in computing terms) to update all the slaves - the master and slaves may be unsynchronised for a period of time.

In the LDAP context, a temporary lack of DIT synchronisation is regarded as unimportant. In the case of a transactional database even a temporary lack of synchronisation is regarded as catastrophic. This emphasises the differences in the characteristics of data that should be maintained in an LDAP enabled directory versus a transactional database.

The configuration of Replication (OpenLDAP and ApacheDS) and Referral is described further in this chapter and featured in the samples.

7.1.1 LDAP Referrals

Figure 7.1-1 below shows a search request with a base DN of uid=cheri,ou=uk,o=grommets,dc=example,dc=com, sent to a referral-based LDAP system, which results in a series of referrals to the LDAP2 and LDAP3 servers:

Figure 7.1-1 - Request generates referrals to LDAP2 and LDAP3

Notes:

  1. All client requests start at the global directory LDAP 1
  2. At LDAP 1, requests for any data with widgets as an RDN in the DN are satisfied immediately from LDAP1, for example:
    dn: uid=cheri,ou=uk,o=widgets,dc=example,dc=com
  3. At LDAP 1 requests for any data with grommets as an RDN in the DN are referred to LDAP2, for example:
    dn: uid=cheri,ou=uk,o=grommets,dc=example,dc=com  
  4. At LDAP 2, requests for any data with uk as an RDN in the DN are referred to LDAP3, for example:
    dn: uid=cheri,ou=uk,o=grommets,dc=example,dc=com  
  5. If the LDAP server is configured to chain (follow the referrals as shown by the alternate dotted lines) then a single data response will be supplied to the LDAP client. Chaining is controlled by LDAP server configuration and by values in the search request (see section 7.4.1 Referral Chaining).

  6. The Figures illustrate explicit referrals using the referral ObjectClass; OpenLDAP servers may also be configured to return a generic referral if the requested DN is not found during a search operation. An example referral entry is sketched below.
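As a sketch only, an explicit referral entry in LDAP1's DIT that refers o=grommets to LDAP2 might look like the following LDIF (the host name ldap2.example.com is an assumption):

# referral entry in LDAP1's DIT - searches at or below this point
# return a referral to LDAP2
dn: o=grommets,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
o: grommets
ref: ldap://ldap2.example.com/o=grommets,dc=example,dc=com

To read or modify the referral entry itself (rather than follow it) the ManageDsaIT control must be used, for example the -M option of OpenLDAP's ldapsearch and ldapmodify tools.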

7.1.2 LDAP Replication

Replication features allow LDAP DIT updates to be copied to one or more LDAP systems for backup and/or performance reasons. In this context it is worth emphasizing that replication operates at the DIT level not the LDAP server level. Thus in a single server running multiple DITs each DIT may be replicated to a different server. Replication occurs periodically within what this guide calls the replication cycle time. OpenLDAP historically used a separate daemon (slurpd) to perform replication but with version 2.3 a new feature (generically known as syncrepl) was introduced and indeed from 2.4 slurpd style replication has been obsoleted. There are two possible replication configurations and multiple variations on each configuration type.

  1. Master-Slave: In a master-slave configuration a single (master) DIT is capable of being updated and these updates are replicated or copied to one or more designated servers running slave DITs. The slave servers operate with read-only copies of the master DIT. Read-only users will access the servers containing the slave DITs and users who need to update the directory can only do so by accessing the server containing the master DIT. In order to confuse its poor users still further OpenLDAP has introduced the terms provider and consumer with the syncrepl replication feature. A provider may be viewed as the source of replication services (what mere mortals would call a master) and a consumer as the destination of replication services (what mere mortals would call a slave). Master-Slave (or provider-consumer) configurations have two obvious shortcomings:

    • Multiple locations. If all or most clients need to update the DIT then either they must access one server (running the slave DIT) for normal read access and another server (running the master DIT) to perform updates, or they must always access the server running the master DIT. In the latter case replication provides backup functionality only.

    • Resilience. Since there is only one server containing a master DIT it represents a single point of failure.

  2. Multi-Master: In a multi-master configuration one or more servers running master DITs may be updated and the resulting updates are propagated to all the other masters.

    Historically OpenLDAP did not support multi-master operation but version 2.4 introduced a multi-master capability. In this context it is worth pointing out two specific variations of the generic update-contention problem of all multi-master configurations, identified by OpenLDAP but true for all LDAP systems:

    1. Value contention: If two updates to the same attribute are performed at the same time (within the replication cycle time) with different values then, depending on the attribute type (SINGLE or MULTI-VALUED), the resulting entry may be in an incorrect or unusable state.

    2. Delete contention: If one user adds a child entry at the same time (within the replication cycle time) as another user deletes the parent entry then the deleted entry will re-appear.

Figure 7.1-2 shows a number of possible replication configurations.

Figure 7.1-2 - Replication Configurations


Notes:

  1. RO = Read-only, RW = Read-Write
  2. LDAP1 Client facing system is a Slave and is read only. Clients must issue Writes to the Master.

  3. LDAP2 Client facing system is a Master and it is replicated to two slaves.

  4. LDAP3 is a Multi-Master and clients may issue reads and/or writes to either system. Each master in this configuration could, in turn, have one or more slave DITs.


7.2 Replication

Replication occurs at the level of the DIT and describes the process of copying updates from a DIT on one LDAP server to the same DIT on one or more other servers. Replication configurations may be either MASTER-SLAVE (the SLAVE copy is always read-only) or MULTI-MASTER. Replication is a configuration (operational) issue. Configurations for both OpenLDAP and ApacheDS are covered in the following sections.

7.2.1 OpenLDAP Replication

Life was once very simple. OpenLDAP replication used slurpd and a temporary file. With version 2.3 of OpenLDAP a new method known as syncrepl (RFC 4533) was introduced while continuing support of slurpd style replication. OpenLDAP version 2.4 has discontinued support for slurpd style replication. The following sections therefore define slurpd style replication for versions up to 2.3 and the new syncrepl style from version 2.4+ (or 2.3+ for the brave).

7.2.1.1 OpenLDAP slurpd Style Replication (up to 2.3)

Slurpd style replication is a 'push' replication (and is obsoleted from version 2.4). It is configured and controlled as shown in Figure 7.2-1:

Figure 7.2-1 - Slurpd Style Replication

When slapd (1) running the Master DIT (7) receives a modify operation (9) it updates the DIT and a timestamped copy of the transaction is written to the log file (2) defined in the master's slapd.conf (5) file replogfile directive.

Slurpd (3) when initially loaded obtains its operational parameters from slapd.conf (5). At a periodic time defined by replicationinterval slurpd will read the log file (2) defined by the replogfile directive and write the updates (10) to one (or more) slave DITs (8) defined by the replica directive(s) in slapd.conf (5).

The slave DIT (8) is a read only copy for all clients except a client which binds using the DN defined by updatedn. The slave server (4) returns the LDAP URI defined by updateref to all modify operations from clients (except those initiated using the DN in updatedn). Both updatedn and updateref are defined in the slapd.conf (6) file. The DN defined in updatedn in (6) MUST be the same as that defined in the replica directive (binddn= parameter) in (5) for this slave instance.

7.2.1.1.1 OpenLDAP slurpd Replication Errors

If slurpd (3) fails to update the slave instance it creates a REJECTION file whose name is the same as that defined in the replogfile directive with .rej appended as shown below:

# slapd.conf replogfile directive
replogfile /var/log/ldap/slave1.log

# REJECTION file will be named
# /var/log/ldap/slave1.log.rej

Each error message in the REJECTION log file is the same as that used in the normal transaction log but is preceded by a line starting with the keyword ERROR containing the error message. An example is shown below:

ERROR: No such attribute
replica: slave1.example.com:389
time: 809618633
dn: uid=rsmith,dc=example,dc=com
changetype: modify
replace: description
description: clown
-
replace: modifiersName
modifiersName: uid=rsmith,dc=example,dc=com
-
replace: modifyTimestamp
modifyTimestamp: 20000805073308Z

To fix the errors either the slave may be edited (in the case above to add a description attribute to the entry) or the REJECTION log may be edited to correct the errors (in the above example the replace: description line could be changed to add: description). There is no need to remove lines beginning with ERROR since these are ignored. After appropriate remedial action the REJECTION file may be re-applied by running slurpd in single-shot mode (after stopping any currently running slurpd) using the following command:

slurpd -o -r /var/log/ldap/slave1.log.rej

# where -r defines the path to the REJECTION file
# and -o indicates single-shot mode

Slurpd will apply the transactions in the defined (-r) file and exit. The normal slurpd should now be restarted.

Slurpd Configuration Examples:

Master slapd.conf configuration:

# slapd master
# global section - check file every 5 minutes
replicationinterval 300

# database section
database bdb
...
# simple security to slave located at ldap-rep1.example.com
# with a cleartext password
# directive only used by slurpd
replica uri=ldap://ldap-rep1.example.com bindmethod=simple
  binddn="uid=admin,ou=admin,dc=example,dc=com" credentials=guess

# saves changes to specified file
# directive used by both slapd and slurpd
replogfile /var/log/ldap/slavedit.log

Slave slapd.conf configuration (on host ldap-rep1.example.com):

# slapd slave
# global section

# database section
database bdb
...

# defines the dn that is used in the
# replica directive of the master
# directive only used by slapd (the slave)
updatedn "uid=admin,ou=admin,dc=example,dc=com"

# referral given if a client tries to update slave
updateref ldap://master-ldap.example.com


7.2.1.2 OpenLDAP syncrepl Style Replication (from 2.3)

OpenLDAP version 2.3 introduced support for a new LDAP Content Synchronization protocol and from version 2.4 this has become the only replication capability supported (slurpd style is now obsoleted). The LDAP Content Synchronization protocol is defined by RFC 4533 and generically known by the name of its controlling slapd.conf directive - syncrepl. syncrepl provides both classic master-slave replication and since version 2.4 allows for multi-master replication. The protocol uses the terms provider (rather than master) to define the source of the replication updates and the term consumer (rather than slave) to define a destination for the updates.

In syncrepl style replication the consumer always initiates the update process - unlike slurpd style where the master (provider) initiates the updates. The consumer may be configured to periodically pull the updates from the provider (refreshOnly) or to request that the provider push updates (refreshAndPersist). In all cases, in order to unambiguously refer to an entry the server must maintain a universally unique identifier (entryUUID) for each entry in a DIT. The process of synchronization is shown in Figure 7.2-2 (refreshOnly) and 7.2-3 (refreshAndPersist):

7.2.1.2.1 Replication refreshOnly (Consumer Pull)

Figure 7.2-2 - syncrepl Provider-Consumer Replication - refreshOnly

A slapd server (1) that wants to replicate a DIT (8) (the consumer) is configured using a syncrepl directive in its slapd.conf file (6). The syncrepl directive defines the location (the name) of the slapd server (3) (the provider) containing the master copy of the DIT. The provider (3) is configured using the overlay syncprov directive in its slapd.conf file (5).

In the refreshOnly type of replication the consumer (1) initiates a connection (2) with the provider (3) - synchronization of DITs takes place and the connection is broken. Periodically the consumer (1) re-connects (2) with the provider (3) and re-synchronizes. refreshOnly synchronization may be viewed as operating in burst mode and the replication cycle time is the time between re-connections.

More Detail: The consumer (1) opens a session (2) with the provider (3) and requests refreshOnly synchronization. Any number of consumers can contact the provider and request synchronization service - the provider is not configured with any information about its consumer(s) - as long as a consumer satisfies the security requirements of the provider its synchronization requests will be satisfied. The synchronization request is essentially an extended LDAP search operation which defines the replication scope - using normal LDAP search parameters (base DN, scope, search filter and attributes) - thus the whole, or part, of the provider's DIT may be updated in a replica synchronization session depending on the search criteria.

The provider is not required to maintain per-consumer state information. Instead, at the end of a synchronization session the provider sends a synchronization cookie (SyncCookie 11) - this cookie contains a Change Sequence Number (contextCSN) - essentially a timestamp indicating the last change sent to this consumer and which may be viewed as a change or synchronization checkpoint. When the consumer initiates a session the last cookie (10) it received from the provider is supplied to indicate to the provider the limits of this synchronization session. Depending on how the consumer was initialised it may not have a SyncCookie the first time it initiates communication, in which case the initial synchronization covers all records in the provider's DIT (within the synchronization scope). A byproduct of this process allows the replica consumer to start with an empty DIT.
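As a rough illustration (the host name and suffix are taken from the configuration examples later in this section), the synchronization checkpoint held by a provider - and, for comparison, by a consumer - can be read with a base-scope search for contextCSN:

# read the synchronization checkpoint from the provider
# (repeat against the consumer to compare replication state)
ldapsearch -x -H ldap://master-ldap.example.com \
  -b "dc=example,dc=com" -s base contextCSN

# a contextCSN value is essentially a timestamp plus counters, e.g.
# contextCSN: 20100723092750.123456Z#000000#001#000000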

The provider (3) for the DIT will respond using one or both of two phases. The first is the present phase (13) and indicates those entries which are still in the DIT (or DIT fragment) and consists of:

  1. For each entry that has changed (since the last synchronization) - the complete entry is sent including its DN and its UUID (entryUUID). The consumer can reliably update its entries from this data.

  2. For each entry that has NOT changed (since the last synchronization) an empty entry with its UUID (entryUUID) is sent.

  3. No messages are sent for entries which have been deleted. Theoretically at the end of the two previous processes the consumer may delete entries not referenced in either.

In the delete phase (14):

  1. The provider returns the DN and UUID (entryUUID) for each entry deleted since the last synchronization. The consumer can reliably delete these entries.

Whether both phases are required is determined by a number of additional techniques.

At the end of the synchronization phase(s) the provider sends a SyncCookie (the current contextCSN) and terminates the session. The consumer saves the SyncCookie and will initiate another synchronization session defined by the interval parameter of its syncrepl directive by sending the last received SyncCookie to limit the scope of the subsequent synchronization session.

syncrepl refreshOnly Configuration Examples:

Master slapd.conf configuration (assumed host name master-ldap.example.com):

# slapd master (provider)
# global section
...

# database section
database bdb
...
# allows read access from consumer
# may need merging with other ACLs
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break

# NOTE:
# the provider configuration contains no reference to any consumers

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

consumer slapd.conf configuration:

# slapd consumer
# global section

# database section
database bdb
...

# provider is ldap://master-ldap.example.com:389, sync interval
# every 1 hour, whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
# NOTE: comments inside the syncrepl directive are rejected by OpenLDAP
#       and are included only to carry further explanation. They MUST NOT
#       appear in an operational file
syncrepl rid=000
  provider=ldap://master-ldap.example.com
  type=refreshOnly
  # re-connect/re-sync every hour
  interval=00:1:00:00
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  # both user (*) and operational (+) attributes required
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  # Warning: password sent in clear - insecure
  credentials=dirtysecret

Up Arrow

7.2.1.2.2 Replication refreshAndPersist (Provider Push)

Figure 7.2-3 - syncrepl Provider-Consumer Replication - refreshAndPersist

A slapd server (1) that wants to replicate a DIT (7) from a server (3) (the provider) is configured using a syncrepl directive in its slapd.conf file (6). The syncrepl directive defines the location (the name) of the slapd server (3) (the provider) containing the master copy of the DIT. The provider (3) is configured using the overlay syncprov directive in its slapd.conf file (5).

In the refreshAndPersist type of replication the consumer (1) initiates a connection (2) with the provider (3) - synchronization (12) of DITs takes place immediately and at the end of this process the connection is maintained (it persists). Subsequent changes (4) to the provider (3) are immediately propagated to the consumer (1).

More Detail: The consumer (1) opens a session (2) with the provider (3) and requests refreshAndPersist synchronization. Any number of consumers can contact the provider and request synchronization service - the provider is not configured with any information about its consumer(s) - as long as a consumer satisfies the security requirements of the provider its synchronization requests will be satisfied. The synchronization request is essentially an extended LDAP search operation which defines the replication scope - using normal LDAP search parameters (base DN, scope, search filter and attributes) - thus the whole, or part, of the provider's DIT may be updated in a replica synchronization session depending on the search criteria.

The provider (3) is not required to maintain per-consumer state information. Instead the provider periodically sends a synchronization cookie (SyncCookie 11) - this cookie contains a Change Sequence Number (contextCSN) - essentially a timestamp indicating the last change sent to this consumer and which may be viewed as a change or synchronization checkpoint. When a refreshAndPersist consumer (1) opens a session (2) with a provider (3) they must first synchronize the state of their DIT (or DIT fragment). Depending on how the consumer was initialised, when it initially opens a session (2) with the provider (3) it may not have a SyncCookie and therefore the scope of the changes is the entire DIT (or DIT fragment). A byproduct of this allows the consumer to start with an empty replica DIT. When a consumer (1) subsequently connects (2) to the provider (3) it will have a SyncCookie. In the case of a refreshAndPersist type of replication re-connection will only occur after a failure of the provider, consumer or network, each of which will terminate a connection that is otherwise maintained permanently.

During the synchronization process the provider (3) for the DIT will respond with one or both of two phases. The first is the present phase (13) and indicates those entries which are still in the DIT (or DIT fragment) and consists of:

  1. For each entry that has changed (since the last synchronization) - the complete entry is sent including its DN and its UUID (entryUUID). The consumer can reliably update its entries from this data.

  2. For each entry that has NOT changed (since the last synchronization) an empty entry with its UUID (entryUUID) is sent.

  3. No messages are sent for entries which have been deleted. Theoretically at the end of the two previous processes the consumer may delete entries not referenced in either.

In the delete phase (14):

  1. The provider returns the DN and UUID (entryUUID) for each entry deleted since the last synchronization. The consumer can reliably delete these entries.

Whether both phases are required is determined by a number of additional techniques.

At the end of the synchronization phase(s) (12) the provider typically sends a SyncCookie (the current contextCSN) and MAINTAINS the session. Subsequent updates (4) - which may be changes, additions or deletions - to the provider's DIT (7) will be immediately sent (15) by the provider (3) to the consumer (1) where the replica DIT (8) can be updated. Changes or additions result in the complete entry (including all attributes) being transferred and the SyncCookie (11) is typically updated as well. The provider DIT (7) and consumer DIT (8) are maintained in synchronization with the replication cycle time approaching the transmission time between provider and consumer.

syncrepl refreshAndPersist Configuration Examples:

Master slapd.conf configuration (assumed host name master-ldap.example.com):

# slapd provider (master)
# global section
...

# database section
database bdb
...
# allows read access from consumer
# may need merging with other ACLs
# referenced dn.base must be same as binddn= of consumer
#
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break

# NOTE:
# the provider configuration contains no reference to any consumers

# define as provider using the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

consumer slapd.conf configuration:

# slapd consumer (slave)
# global section

# database section
database bdb
...

# provider is ldap://master-ldap.example.com:389,
# whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
# NOTE: comments inside the syncrepl directive are rejected by OpenLDAP
#       and are included only to carry further explanation. They MUST NOT
#       appear in an operational file
syncrepl rid=000
  provider=ldap://master-ldap.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  # both user (*) and operational (+) attributes required
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  # warning: password sent in clear - insecure
  credentials=dirtysecret


7.2.1.2.3 OpenLDAP syncrepl (N-Way) Multi-Master

OpenLDAP 2.4 introduced N-way Multi-Master support. In N-Way Multi-Master configurations any number of masters may be synchronized with one another. The functionality of replication has been previously described for refreshOnly and refreshAndPersist and is not repeated here. The following notes and configuration examples are unique to N-Way Multi-Mastering.

Note: When running N-Way Multi-Mastering it is vital that the clocks on all the master (providers) are synchronized to the same time source, for example, they should all run NTP (Network Time Protocol).

In N-Way Multi-Mastering each provider of synchronization services is also a consumer of synchronization services as shown in Figure 7.2-4:

Figure 7.2-4 - syncrepl N-Way Multi-Mastering

Figure 7.2-4 shows a 3-Way Multi-Master (1, 2, 3) Configuration. Each Master is configured - in its slapd.conf file (4, 5, 6) - as a provider (using the overlay syncprov directive) and as a consumer for all of the other masters (using the syncrepl directive). Each provider must be uniquely identified using a ServerID directive. Each provider is further, as noted above, synchronized to a common clock source. Thus the provider (1) of DIT (7) contains an overlay syncprov directive (the provider overlay) and two refreshAndPersist type syncrepl directives, one for each of the other providers (2, 3) as shown by the communication links (1-1, 1-2). Similarly each of the other providers has a similar configuration - a single provider capability and refreshAndPersist syncrepl directives for the other two masters.

In this configuration, assuming that a refreshAndPersist type of synchronization is used (it is not clear why you would even want to use refreshOnly here), a modify (10) to any master will be immediately propagated to all the other masters (providers).

Version 2.4 of N-Way Multi-Master replication does not support delta synchronization.

syncrepl N-Way Multi-Master Configuration Examples:

Assume three masters (ldap1.example.com, ldap2.example.com and ldap3.example.com) using syncrepl N-Way multi-master - then the three masters would have slapd.conf files as shown:

slapd.conf for ldap1.example.com:

# slapd master ldap1.example.com
# global section
...
# uniquely identifies this server
serverID 001
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACLs
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break

# NOTE:
# syncrepl directives for each of the other masters

# provider is ldap://ldap2.example.com:389,
# whole DIT (searchbase), all user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=000
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap3.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
...
# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq
...
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

slapd.conf for ldap2.example.com:

# slapd master ldap2.example.com
# global section
...
# uniquely identifies this server
serverID 002
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACLs
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break

# NOTE:
# syncrepl directives for each of the other masters

# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=000
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap3.example.com:389,
# whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
...
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq
...
# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

slapd.conf for ldap3.example.com:

# slapd master ldap3.example.com
# global section
...
# uniquely identifies this server
serverID 003
# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACLs
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break

# NOTE:
# syncrepl directives for each of the other masters

# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with a cleartext password
syncrepl rid=000
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# provider is ldap://ldap2.example.com:389,
# whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
syncrepl rid=001
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret

# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

Notes:

  1. Since the masters all replicate the same DIT the binddn is shown as having the same value throughout. This is perfectly permissible. However if any single server is subverted - all are subverted. A safer strategy may be to use a unique binddn entry for each server. This will require changes in the syncrepl and access to directives.

  2. Each rid parameter of the syncrepl directive must be unique within the slapd.conf file (the server). The serverID value must be unique to each server. There is no relationship between rid and serverID values.

  3. mirrormode true is required (confusingly) for multi-master configurations and must appear after all the syncrepl directives in the database section. Omitting this directive in any master configuration will cause all writes to fail!

  4. Multi-mastering requires clock synchronization between all the servers. Each server should be an NTP client and all servers should point to the same clock source. It is not enough to use a command such as ntpdate at server start-up or a similar technique since clock drift can be surprisingly large. A minimal ntp.conf sketch follows these notes.

  5. Update contention is one of many problems encountered in multi-master replication. OpenLDAP uses a timestamp to resolve such contention. Thus if updates are performed to the same attribute(s) at roughly the same time (within the propagation time difference) on separate servers then one of the updates will be lost for the attribute in question. The update that is lost is the one with the lower timestamp value - the difference need only be milliseconds. This is an unavoidable by-product of multi-mastering. NTP will minimize the occurrence of this problem.
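Purely as an illustration of note 4, a minimal ntp.conf might look like the sketch below; the pool host names are placeholders and every master should reference the same time source(s):

# /etc/ntp.conf (illustrative) - identical on every master
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift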


7.2.1.2.4 Session Logs, Access Logs and Delta-sync

It has all been simple up until this point. Now it gets a bit messy. All in the interests of reducing traffic flows between provider and consumer.

refreshOnly synchronization can have a considerable time lag before update propagation - depending on the interval parameter of the syncrepl directive. As the time interval between updates is reduced (to minimise propagation delays) it effectively approaches refreshAndPersist but incurs an initial synchronization on every re-connection. If the present phase is needed during synchronization then, even if no changes have been made since the last synchronization, every unchanged entry will result in an empty entry (with its identifying entryUUID) being sent, which can take a considerable period of time.

In refreshAndPersist mode re-synchronization only occurs on initial connection (re-connection only occurs after a major failure of the provider, consumer or network). However even in this mode an update to any attribute in an entry will cause the entire entry to be transferred. For very big entries this can cause an unacceptable overhead when only a trivial change is made.

Finally pathological LDAP implementations can create update problems. As a worst case assume an application is run daily that has the effect of changing a 1 octet value in every DIT entry. This would have the effect of transferring the entire DIT to all consumers - perhaps a tad inconvenient or perhaps even impossible within a 24 hour period.

OpenLDAP provides two methods to ameliorate these problems: the session log and the access log. The objective of both methods is to minimise data transfer and in the case of Access Logs to provide what is called delta-synchronization (only transferring entry changes not whole entries).

Session Logs

The syncprov overlay has the ability to maintain a session log. The session log parameter takes the form:

syncprov-sessionlog ops

# where ops defines the number of
# operations that can be stored
# and is wrapped when full
# NOTE: version 2.3 showed a sid parameter
#       which was removed in 2.4

# example of syncprov definition
# including a session log of 100 entries (changes)
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

The session log is a memory based log and contains all operations (except add operations). Depending on the time period covered by the session log it may allow the provider to skip the optional present phase - thus significantly speeding up the synchronization process. If, for example, no changes have occurred in the session log since the last synchronization there is no need for a present phase. The session log may be used with any type (refreshOnly or refreshAndPersist) but is clearly most useful with refreshOnly. If the session log is not big enough to cover the period since the last consumer synchronization request then a full re-synchronization sequence (including the present phase) is invoked. No special syncrepl parameters are required in the consumer when using the session log.

Access Log (Delta Synchronization)

The accesslog provides a log of LDAP operations on a target DIT and makes them available in a related, but separate accesslog DIT. The accesslog functionality is provided by the overlay accesslog directive.

In the normal case, replica synchronization operations perform the update using information in the DIT which is the subject of the search request contained within the synchronization request. Alternatively the synchronization may be performed by targeting the syncrepl directive at the accesslog instead. Since the objects stored in the access log only contain the changes (including deletes, adds, renames and modrdn operations) the volume of data is significantly lower than when performing a full synchronization operation against the main DIT (or even a DIT fragment), where if any attribute is changed the entire entry is transferred. Use of the accesslog is known as delta replication or delta synchronization or even delta-syncrepl.

Use of this form of replication requires definition of an accesslog DIT in the provider and the use of the logbase, logfilter and syncdata parameters of the syncrepl directive in the consumer as shown in the example below:

Delta Replication (Accesslog) Examples:

Provider configuration (assumed hostname of ldap1.example.com):

# slapd provider ldap1.example.com
# global section
...
# allow read access to target and accesslog DITs
# to consumer. This form applies a global access rule
# the DN used MUST be the same as that used in the binddn
# parameter of the syncrepl directive of the consumer
access to *
     by dn.base="cn=admin,ou=people,dc=example,dc=com" read
     by * break
...
# database section - target DIT
# with suffix dc=example,dc=com
database bdb
suffix "dc=example,dc=com"
...
# syncprov specific indexing (add others as required)
# not essential but improves performance
index entryCSN,entryUUID eq
...
# define access log overlay and parameters
# prunes the accesslog every day:
# deletes entries more than 2 days old
# log writes (covers add, delete, modify, modrdn)
# log only successful operations
# log has base suffix of cn=deltalog
overlay accesslog
logdb "cn=deltalog"
logops writes
logsuccess TRUE
logpurge 2+00:00 1+00:00

# define the replica provider for this database
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

# now define the accesslog DIT
# normal database definition
database bdb
...
suffix "cn=deltalog"
# these are recommended to optimize accesslog
index default eq
index entryCSN,objectClass,reqEnd,reqResult,reqStart
...
# the access log is also a provider
# syncprov-nopresent inhibits the present phase of synchronization
# syncprov-reloadhint TRUE mandatory for delta sync
overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE

Consumer configuration:

# slapd consumer
# global section
...

# database section
database bdb
suffix "dc=example,dc=com"
...

# NOTE:
# syncrepl directive will use the accesslog for
# delta synchronization
# provider is ldap://ldap1.example.com:389,
# whole DIT (searchbase), all user attributes synchronized
# simple security with a cleartext password
# binddn is used to authorize access in provider
# logbase references logdb (cn=deltalog) in provider
# logfilter allows successful add, delete, modify, modrdn ops
# syncdata defines this to use accesslog format
syncrepl rid=000
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
  logbase="cn=deltalog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  syncdata=accesslog
...

Notes:

  1. The logfilter search ("(&(objectClass=auditWriteObject)(reqResult=0))") reads all standard entries in the accesslog that were successful (reqResult=0) - since we defined the only entries to be logged as successful ones in the provider (logsuccess TRUE) this is theoretically redundant but can do no harm.
  2. The database definition for the accesslog (cn=deltalog) in the provider does not contain rootdn or rootpw directives since these are not required. There is, further, no access to directive, which means the global one defined at the start of the provider slapd.conf file is used. This user is defined to have minimal access to both the target and accesslog DITs at the expense of having to define a real entry in the target DIT. Using the rootdn and rootpw of the target DIT as the binddn within the consumer syncrepl would also work (assuming the global access to directive was removed and an appropriate access to directive added to the accesslog DIT definition) but this exposes a high-grade user to potential sniffing attacks and is not advised.
  3. The consumer may be started with an empty DIT in which case normal synchronization occurs initially and when complete subsequent updates occur via the accesslog mechanism.
  4. When the accesslog transmits a change the new attribute value, entryCSN, modifiersName (DN) and modifyTimestamp are supplied to the consumer. The latter three attributes are all operational and are required to provide a perfect replica. A sketch of a hypothetical accesslog entry is shown after these notes.
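For illustration only, an entry written by the accesslog overlay into cn=deltalog after a modify of the uid=rsmith entry used earlier might look roughly as follows (attribute names per slapo-accesslog; all values are invented):

dn: reqStart=20100723092750.000001Z,cn=deltalog
objectClass: auditModify
reqStart: 20100723092750.000001Z
reqEnd: 20100723092750.000002Z
reqType: modify
reqSession: 1001
reqAuthzID: cn=admin,ou=people,dc=example,dc=com
reqDN: uid=rsmith,dc=example,dc=com
reqResult: 0
reqMod: description:= clown
reqMod: entryCSN:= 20100723092750.123456Z#000000#001#000000
reqMod: modifiersName:= cn=admin,ou=people,dc=example,dc=com
reqMod: modifyTimestamp:= 20100723092750Z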


7.2.2 ApacheDS Replication

One day real soon now ?



7.2.3 Synching DIT before Slurpd Replication

Before slurpd replication can occur the DITs (Master and Slave(s)) must be known to be in the same state. A manual synchronization process must be performed first as itemised below:

Note: In the case of OpenLDAP using syncrepl style replication a slave or multi-master can synchronize starting from an empty DIT. However the process defined below may also be used and depending on the volumes involved may offer a more efficient (quicker) starting point.

  1. Stop the LDAP server that will contain the master DIT instance. This is essential to prevent further DIT updates.

  2. Create an LDIF copy of the DIT to be replicated using the appropriate off-line tools for the LDAP server.

    For OpenLDAP use slapcat (see the command sketch following this list).

  3. Configure the server running the master DIT instance.

    For OpenLDAP using slurpd style replication - add the replica, replogfile and replicationinterval directives to the slapd.conf file. Do not restart the server at this time.

    Note: If running OpenLDAP using the run-time configuration feature (cn=config) the server must be active - detailed instructions to be supplied.

  4. Move the LDIF file created in step 2 above to the server(s) that will run the slave or multi-master instance.

  5. Stop the LDAP server that will run the slave or multi-master instance.

  6. Apply the LDIF file moved in step 4 to the server using the appropriate off-line tools for the LDAP server.

    For OpenLDAP use slapadd. Since the server has not yet been configured the -n (dbnum) option should be used.

  7. Configure the server that will run the slave or multi-master instance to act as either a slave or a multi-master.

    For OpenLDAP using slurpd style replication this will involve defining a database directive and all its associated directives (since the -n dbnum option was used to add the DIT the order in which this database is defined is very important); for replication specifically add the updatedn and updateref directives.

    Note: If running OpenLDAP using the run-time configuration feature (cn=config) the server must be active - detailed instructions to be supplied.

  8. If a master-slave configuration, start the server running the slave DIT instance. Confirm it is working. If a multi-master configuration start this copy of the master and confirm it is running.

  9. Start the server running the master instance of the DIT or the second master in a multi-master configuration.

  10. Perform a test transaction on the master (one of the masters in a multi-master configuration) and confirm it has been propagated to the slave (or second master). If not start looking at the logs. And panic. Always helps.
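A hedged command sketch of steps 2 and 6 for OpenLDAP follows; the configuration path, LDIF location and database number are assumptions and must be adjusted to the local installation:

# step 2 - on the master (slapd stopped): dump the DIT to LDIF
slapcat -f /etc/openldap/slapd.conf -l /tmp/master.ldif

# step 6 - on the slave (slapd stopped): load the LDIF
# -n 2 selects the database number of the replicated DIT (illustrative)
slapadd -f /etc/openldap/slapd.conf -n 2 -l /tmp/master.ldif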


7.2.3.1 Synching DIT before syncrepl Replication

When initiating syncrepl replication there are two possible strategies:

  1. Do nothing. After configuring the consumer there is no need to do anything further. The initial synchronization request will synchronize the replica from an empty state. Where the DIT is very large this may take an unacceptably long period of time.

  2. Load an LDIF copy of the replica from the provider using slapadd before starting the replication. Depending on how this is done the initial synchronization may be minimal or non-existent. The following instructions itemize such a process when using a provider running OpenLDAP 2.3+ and assume that the provider has been configured for replication:

    1. Save an LDIF copy of the provider's DIT (using a Browser or even slapcat if using a BDB or HDB backend). There is no need to stop the provider since any inconsistencies during the saving process or between the state when the DIT was saved and loaded into the consumer will be resolved during initial synchronization.

    2. Move the LDIF file to the consumer location.

    3. Configure the consumer.

    4. Load the LDIF into the consumer using slapadd with the -w option to create the most up-to-date SyncCookie. Example:

      slapadd -l /path/to/provider/copy/ldif -w  
    5. Start the consumer (a start-up sketch follows).
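A start-up sketch, assuming typical packaged install paths (adjust for the local installation):

# start the consumer slapd; initial synchronization then runs automatically
/usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldap:///"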


Copyright © 1994 - 2010 ZyTrax, Inc. All rights reserved.

 

Source: http://www.zytrax.com/books/ldap/ch7/#contents
