Compare commits


7 commits

Author SHA1 Message Date
Viktor Ashirov
e47a703688 Resolves: Issue 6997 - Logic error in get_bdb_impl_status prevents bdb2mdb execution 2025-09-18 20:23:04 +02:00
Viktor Ashirov
f6847ae714 Update to 3.1.3-2
- Resolves: Issue 5120 - Fix compilation error
- Resolves: Issue 6782 - Improve paged result locking
- Resolves: Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN
- Resolves: Issue 6822 - Backend creation cleanup and Database UI tab error handling (#6823)
- Resolves: Issue 6753 - Add 'add_exclude_subtree' and 'remove_exclude_subtree' methods to Attribute uniqueness plugin
- Resolves: Issue 6857 - uiduniq: allow specifying match rules in the filter
- Resolves: Issue 6756 - CLI, UI - Properly handle disabled NDN cache (#6757)
- Resolves: Issue 6854 - Refactor for improved data management (#6855)
- Resolves: Issue 6850 - AddressSanitizer: memory leak in mdb_init
- Resolves: Issue 6848 - AddressSanitizer: leak in do_search
- Resolves: Issue 6865 - AddressSanitizer: leak in agmt_update_init_status
- Resolves: Issue 6868 - UI - schema attribute table expansion break after moving to a new page
- Resolves: Issue 6859 - str2filter is not fully applying matching rules
- Resolves: Issue 6872 - compressed log rotation creates files with world readable permission
- Resolves: Issue 6888 - Missing access JSON logging for TLS/Client auth
- Resolves: Issue 6772 - dsconf - Replicas with the "consumer" role allow for viewing and modification of their changelog. (#6773)
- Resolves: Issue 6893 - Log user that is updated during password modify extended operation
- Resolves: Issue 6901 - Update changelog trimming logging
- Resolves: Issue 6430 - implement read-only bdb (#6431)
- Resolves: Issue 6663 - Fix NULL subsystem crash in JSON error logging (#6883)
- Resolves: Issue 6895 - Crash if repl keep alive entry can not be created
- Resolves: Issue 6884 - Mask password hashes in audit logs (#6885)
- Resolves: Issue 6778 - Memory leak in roles_cache_create_object_from_entry part 2
- Resolves: Issue 6901 - Update changelog trimming logging - fix tests
- Resolves: Issue 6181 - RFE - Allow system to manage uid/gid at startup
- Resolves: Issue 6468 - CLI - Fix default error log level
- Resolves: Issue 6768 - ns-slapd crashes when a referral is added (#6780)
- Resolves: Issue 6430 - Fix build with bundled libdb
2025-08-21 15:32:38 +02:00
Viktor Ashirov
04be94a5bd Update to 3.1.3 2025-07-08 16:25:42 +02:00
Viktor Ashirov
5e2ae600d0 Update to 3.1.2-3
- Resolves: Issue 6489 - After log rotation refresh the FD pointer
- Resolves: Issue 6554 - During import of entries without nsUniqueId, a supplier generates duplicate nsUniqueId (LMDB only)
- Resolves: Issue 6555 - Potential crash when deleting a replicated backend
2025-02-14 14:38:59 +01:00
Viktor Ashirov
30045e7e34 Replace python3-magic with python3-file-magic 2025-01-27 11:42:37 +01:00
Viktor Ashirov
e9db1f7c5a Update to 3.1.2 2025-01-24 09:28:21 +01:00
Viktor Ashirov
b3f6ccd011 Resolves: VLV errors in Fedora 40 with RSNv3 and pruning enabled (rhbz#2317851) 2024-10-15 17:44:50 +02:00
32 changed files with 28773 additions and 420 deletions

.gitignore vendored

@@ -1,232 +1,4 @@
*~
/389-ds-base-1.2.7.2.tar.bz2
/389-ds-base-1.2.7.3.tar.bz2
/389-ds-base-1.2.7.4.tar.bz2
/389-ds-base-1.2.7.5.tar.bz2
/389-ds-base-1.2.8.a1.tar.bz2
/389-ds-base-1.2.8.a2.tar.bz2
/389-ds-base-1.2.8.a3.tar.bz2
/389-ds-base-1.2.8.rc1.tar.bz2
/389-ds-base-1.2.8.rc2.tar.bz2
/389-ds-base-1.2.8.rc4.tar.bz2
/389-ds-base-1.2.8.rc5.tar.bz2
/389-ds-base-1.2.8.0.tar.bz2
/389-ds-base-1.2.8.1.tar.bz2
/389-ds-base-1.2.8.2.tar.bz2
/389-ds-base-1.2.8.3.tar.bz2
/389-ds-base-1.2.9.a1.tar.bz2
/389-ds-base-1.2.9.a2.tar.bz2
/389-ds-base-1.2.9.0.tar.bz2
/389-ds-base-1.2.9.1.tar.bz2
/389-ds-base-1.2.9.2.tar.bz2
/389-ds-base-1.2.9.3.tar.bz2
/389-ds-base-1.2.9.4.tar.bz2
/389-ds-base-1.2.9.5.tar.bz2
/389-ds-base-1.2.9.6.tar.bz2
/389-ds-base-1.2.9.7.tar.bz2
/389-ds-base-1.2.9.8.tar.bz2
/389-ds-base-1.2.9.9.tar.bz2
/389-ds-base-1.2.9.10.tar.bz2
/389-ds-base-1.2.10.a1.tar.bz2
/389-ds-base-1.2.10.a2.tar.bz2
/389-ds-base-1.2.10.a3.tar.bz2
/389-ds-base-1.2.10.a4.tar.bz2
/389-ds-base-1.2.10.a5.tar.bz2
/389-ds-base-1.2.10.a6.tar.bz2
/389-ds-base-1.2.10.a7.tar.bz2
/389-ds-base-1.2.10.a8.tar.bz2
/389-ds-base-1.2.10.rc1.tar.bz2
/389-ds-base-1.2.10.0.tar.bz2
/389-ds-base-1.2.10.1.tar.bz2
/389-ds-base-1.2.10.2.tar.bz2
/389-ds-base-1.2.10.3.tar.bz2
/389-ds-base-1.2.10.4.tar.bz2
/389-ds-base-1.2.11.a1.tar.bz2
/389-ds-base-1.2.11.1.tar.bz2
/389-ds-base-1.2.11.2.tar.bz2
/389-ds-base-1.2.11.3.tar.bz2
/389-ds-base-1.2.11.4.tar.bz2
/389-ds-base-1.2.11.5.tar.bz2
/389-ds-base-1.2.11.6.tar.bz2
/389-ds-base-1.2.11.7.tar.bz2
/389-ds-base-1.2.11.8.tar.bz2
/389-ds-base-1.2.11.9.tar.bz2
/389-ds-base-1.2.11.10.tar.bz2
/389-ds-base-1.2.11.11.tar.bz2
/389-ds-base-1.2.11.12.tar.bz2
/389-ds-base-1.2.11.13.tar.bz2
/389-ds-base-1.2.11.14.tar.bz2
/389-ds-base-1.2.11.15.tar.bz2
/389-ds-base-1.3.0.a1.tar.bz2
/389-ds-base-1.3.0.rc1.tar.bz2
/389-ds-base-1.3.0.rc2.tar.bz2
/389-ds-base-1.3.0.rc3.tar.bz2
/389-ds-base-1.3.0.0.tar.bz2
/389-ds-base-1.3.0.1.tar.bz2
/389-ds-base-1.3.0.2.tar.bz2
/389-ds-base-1.3.0.3.tar.bz2
/389-ds-base-1.3.0.4.tar.bz2
/389-ds-base-1.3.0.5.tar.bz2
/389-ds-base-1.3.1.0.tar.bz2
/389-ds-base-1.3.1.1.tar.bz2
/389-ds-base-1.3.1.2.tar.bz2
/389-ds-base-1.3.1.3.tar.bz2
/389-ds-base-1.3.1.4.tar.bz2
/389-ds-base-1.3.1.5.tar.bz2
/389-ds-base-1.3.1.6.tar.bz2
/389-ds-base-1.3.1.7.tar.bz2
/389-ds-base-1.3.1.8.tar.bz2
/389-ds-base-1.3.1.9.tar.bz2
/389-ds-base-1.3.1.10.tar.bz2
/389-ds-base-1.3.1.11.tar.bz2
/389-ds-base-1.3.2.0.tar.bz2
/389-ds-base-1.3.2.1.tar.bz2
/389-ds-base-1.3.2.2.tar.bz2
/389-ds-base-1.3.2.3.tar.bz2
/389-ds-base-1.3.2.4.tar.bz2
/389-ds-base-1.3.2.5.tar.bz2
/389-ds-base-1.3.2.6.tar.bz2
/389-ds-base-1.3.2.7.tar.bz2
/389-ds-base-1.3.2.8.tar.bz2
/389-ds-base-1.3.2.9.tar.bz2
/389-ds-base-1.3.2.10.tar.bz2
/389-ds-base-1.3.2.11.tar.bz2
/389-ds-base-1.3.2.12.tar.bz2
/389-ds-base-1.3.2.13.tar.bz2
/389-ds-base-1.3.2.14.tar.bz2
/389-ds-base-1.3.2.15.tar.bz2
/389-ds-base-1.3.2.16.tar.bz2
/389-ds-base-1.3.2.17.tar.bz2
/389-ds-base-1.3.2.18.tar.bz2
/389-ds-base-1.3.2.19.tar.bz2
/389-ds-base-1.3.2.20.tar.bz2
/389-ds-base-1.3.2.21.tar.bz2
/389-ds-base-1.3.2.22.tar.bz2
/389-ds-base-1.3.2.23.tar.bz2
/389-ds-base-1.3.3.0.tar.bz2
/389-ds-base-1.3.3.2.tar.bz2
/389-ds-base-1.3.3.3.tar.bz2
/389-ds-base-1.3.3.4.tar.bz2
/389-ds-base-1.3.3.5.tar.bz2
/389-ds-base-1.3.3.6.tar.bz2
/389-ds-base-1.3.3.7.tar.bz2
/389-ds-base-1.3.3.8.tar.bz2
/389-ds-base-1.3.3.9.tar.bz2
/389-ds-base-1.3.3.10.tar.bz2
/389-ds-base-1.3.3.11.tar.bz2
/389-ds-base-1.3.3.12.tar.bz2
/389-ds-base-1.3.4.0.tar.bz2
/nunc-stans-0.1.3.tar.bz2
/nunc-stans-0.1.4.tar.bz2
/389-ds-base-1.3.4.1.tar.bz2
/nunc-stans-0.1.5.tar.bz2
/389-ds-base-1.3.4.2.tar.bz2
/389-ds-base-1.3.4.3.tar.bz2
/389-ds-base-1.3.4.4.tar.bz2
/389-ds-base-1.3.4.5.tar.bz2
/389-ds-base-1.3.4.6.tar.bz2
/389-ds-base-1.3.4.7.tar.bz2
/389-ds-base-1.3.4.8.tar.bz2
/389-ds-base-1.3.5.0.tar.bz2
/nunc-stans-0.1.8.tar.bz2
/389-ds-base-1.3.5.1.tar.bz2
/389-ds-base-1.3.5.3.tar.bz2
/389-ds-base-1.3.5.4.tar.bz2
/389-ds-base-1.3.5.5.tar.bz2
/389-ds-base-1.3.5.6.tar.bz2
/389-ds-base-1.3.5.10.tar.bz2
/389-ds-base-1.3.5.11.tar.bz2
/389-ds-base-1.3.5.12.tar.bz2
/389-ds-base-1.3.5.13.tar.bz2
/389-ds-base-1.3.5.14.tar.bz2
/nunc-stans-0.2.0.tar.bz2
/389-ds-base-1.3.6.1.tar.bz2
/389-ds-base-1.3.6.2.tar.bz2
/389-ds-base-1.3.6.3.tar.bz2
/389-ds-base-1.3.6.4.tar.bz2
/389-ds-base-1.3.6.5.tar.bz2
/389-ds-base-1.3.6.6.tar.bz2
/389-ds-base-1.3.7.1.tar.bz2
/389-ds-base-1.3.7.2.tar.bz2
/389-ds-base-1.3.7.3.tar.bz2
/389-ds-base-1.3.7.4.tar.bz2
/389-ds-base-1.4.0.0.tar.bz2
/389-ds-base-1.4.0.1.tar.bz2
/389-ds-base-1.4.0.2.tar.bz2
/389-ds-base-1.4.0.3.tar.bz2
/389-ds-base-1.4.0.4.tar.bz2
/389-ds-base-1.4.0.5.tar.bz2
/389-ds-base-1.4.0.6.tar.bz2
/389-ds-base-1.4.0.7.tar.bz2
/389-ds-base-1.4.0.8.tar.bz2
/389-ds-base-1.4.0.9.tar.bz2
/389-ds-base-1.4.0.10.tar.bz2
/jemalloc-5.0.1.tar.bz2
/389-ds-base-1.4.0.11.tar.bz2
/jemalloc-5.1.0.tar.bz2
/389-ds-base-1.4.0.12.tar.bz2
/389-ds-base-1.4.0.13.tar.bz2
/389-ds-base-1.4.0.14.tar.bz2
/389-ds-base-1.4.0.15.tar.bz2
/389-ds-base-1.4.0.16.tar.bz2
/389-ds-base-1.4.0.17.tar.bz2
/389-ds-base-1.4.0.18.tar.bz2
/389-ds-base-1.4.0.19.tar.bz2
/389-ds-base-1.4.0.20.tar.bz2
/389-ds-base-1.4.1.1.tar.bz2
/389-ds-base-1.4.1.2.tar.bz2
/389-ds-base-1.4.1.3.tar.bz2
/389-ds-base-1.4.1.4.tar.bz2
/389-ds-base-1.4.1.5.tar.bz2
/jemalloc-5.2.0.tar.bz2
/389-ds-base-1.4.1.6.tar.bz2
/389-ds-base-1.4.2.1.tar.bz2
/389-ds-base-1.4.2.2.tar.bz2
/389-ds-base-1.4.2.3.tar.bz2
/389-ds-base-1.4.2.4.tar.bz2
/389-ds-base-1.4.2.5.tar.bz2
/389-ds-base-1.4.3.1.tar.bz2
/jemalloc-5.2.1.tar.bz2
/389-ds-base-1.4.3.2.tar.bz2
/389-ds-base-1.4.3.3.tar.bz2
/389-ds-base-1.4.3.4.tar.bz2
/389-ds-base-1.4.3.5.tar.bz2
/389-ds-base-1.4.4.0.tar.bz2
/389-ds-base-1.4.4.1.tar.bz2
/389-ds-base-1.4.4.2.tar.bz2
/389-ds-base-1.4.4.3.tar.bz2
/389-ds-base-1.4.4.4.tar.bz2
/389-ds-base-1.4.4.6.tar.bz2
/389-ds-base-1.4.5.0.tar.bz2
/389-ds-base-2.0.1.tar.bz2
/389-ds-base-2.0.2.tar.bz2
/389-ds-base-2.0.3.tar.bz2
/389-ds-base-2.0.4.tar.bz2
/389-ds-base-2.0.4.3.tar.bz2
/389-ds-base-2.0.5.tar.bz2
/389-ds-base-2.0.6.tar.bz2
/389-ds-base-2.0.7.tar.bz2
/389-ds-base-2.0.10.tar.bz2
/389-ds-base-2.0.11.tar.bz2
/389-ds-base-2.0.12.tar.bz2
/389-ds-base-2.0.13.tar.bz2
/389-ds-base-2.1.0.tar.bz2
/389-ds-base-2.2.0.tar.bz2
/389-ds-base-2.1.1.tar.bz2
/jemalloc-5.3.0.tar.bz2
/389-ds-base-2.2.1.tar.bz2
/389-ds-base-2.2.2.tar.bz2
/389-ds-base-2.3.0.tar.bz2
/389-ds-base-2.3.1.tar.bz2
/389-ds-base-2.3.2.tar.bz2
/389-ds-base-2.4.0.tar.bz2
/389-ds-base-2.4.1.tar.bz2
/389-ds-base-2.4.2.tar.bz2
/389-ds-base-2.4.3.tar.bz2
/389-ds-base-2.4.4.tar.bz2
/389-ds-base-2.4.5.tar.bz2
/389-ds-base-3.0.1.tar.bz2
/389-ds-base-3.0.2.tar.bz2
/389-ds-base-3.1.0.tar.bz2
/389-ds-base-*.tar.bz2
/jemalloc-*.tar.bz2
/libdb-5.3.28-59.tar.bz2
/389-ds-base-3.1.1.tar.bz2


@@ -0,0 +1,48 @@
From a2d3ba3456f59b77443085d17b36b424437fbef1 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 11 Aug 2025 13:22:52 +0200
Subject: [PATCH] Issue 5120 - Fix compilation error
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Bug Description:
Compilation fails with `-Wunused-function`:
```
ldap/servers/slapd/main.c:290:1: warning: referral_set_defaults defined but not used [-Wunused-function]
290 | referral_set_defaults(void)
| ^~~~~~~~~~~~~~~~~~~~~
make: *** [Makefile:4148: all] Error 2
```
Fix Description:
Remove unused function `referral_set_defaults`.
Fixes: https://github.com/389ds/389-ds-base/issues/5120
---
ldap/servers/slapd/main.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/ldap/servers/slapd/main.c b/ldap/servers/slapd/main.c
index 9d81d80f3..c370588e5 100644
--- a/ldap/servers/slapd/main.c
+++ b/ldap/servers/slapd/main.c
@@ -285,14 +285,6 @@ main_setuid(char *username)
return 0;
}
-/* set good defaults for front-end config in referral mode */
-static void
-referral_set_defaults(void)
-{
- char errorbuf[SLAPI_DSE_RETURNTEXT_SIZE];
- config_set_maxdescriptors(CONFIG_MAXDESCRIPTORS_ATTRIBUTE, "1024", errorbuf, 1);
-}
-
static int
name2exemode(char *progname, char *s, int exit_if_unknown)
{
--
2.49.0


@@ -0,0 +1,127 @@
From dcc402a3dd9a8f316388dc31da42786fbc2c1a88 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Thu, 15 May 2025 10:35:27 -0400
Subject: [PATCH] Issue 6782 - Improve paged result locking
Description:
When cleaning a slot, instead of mem setting everything to Zero and restoring
the mutex, manually reset all the values leaving the mutex pointer
intact.
There is also a deadlock possibility when checking for abandoned PR search
in opshared.c, and we were checking a flag value outside of the per_conn
lock.
Relates: https://github.com/389ds/389-ds-base/issues/6782
Reviewed by: progier & spichugi(Thanks!!)
---
ldap/servers/slapd/opshared.c | 10 +++++++++-
ldap/servers/slapd/pagedresults.c | 27 +++++++++++++++++----------
2 files changed, 26 insertions(+), 11 deletions(-)
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index 5ea919e2d..545518748 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -619,6 +619,14 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
int32_t tlimit;
slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit);
pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx);
+ /* When using this mutex in conjunction with the main paged
+ * result lock, you must do so in this order:
+ *
+ * --> pagedresults_lock()
+ * --> pagedresults_mutex
+ * <-- pagedresults_mutex
+ * <-- pagedresults_unlock()
+ */
pagedresults_mutex = pageresult_lock_get_addr(pb_conn);
}
@@ -744,11 +752,11 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx);
if (pr_search_result) {
if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) {
+ pthread_mutex_unlock(pagedresults_mutex);
pagedresults_unlock(pb_conn, pr_idx);
/* Previous operation was abandoned and the simplepaged object is not in use. */
send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL);
rc = LDAP_SUCCESS;
- pthread_mutex_unlock(pagedresults_mutex);
goto free_and_return;
} else {
slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, pr_search_result);
diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c
index 642aefb3d..c3f3aae01 100644
--- a/ldap/servers/slapd/pagedresults.c
+++ b/ldap/servers/slapd/pagedresults.c
@@ -48,7 +48,6 @@ pageresult_lock_get_addr(Connection *conn)
static void
_pr_cleanup_one_slot(PagedResults *prp)
{
- PRLock *prmutex = NULL;
if (!prp) {
return;
}
@@ -56,13 +55,17 @@ _pr_cleanup_one_slot(PagedResults *prp)
/* sr is left; release it. */
prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set));
}
- /* clean up the slot */
- if (prp->pr_mutex) {
- /* pr_mutex is reused; back it up and reset it. */
- prmutex = prp->pr_mutex;
- }
- memset(prp, '\0', sizeof(PagedResults));
- prp->pr_mutex = prmutex;
+
+ /* clean up the slot except the mutex */
+ prp->pr_current_be = NULL;
+ prp->pr_search_result_set = NULL;
+ prp->pr_search_result_count = 0;
+ prp->pr_search_result_set_size_estimate = 0;
+ prp->pr_sort_result_code = 0;
+ prp->pr_timelimit_hr.tv_sec = 0;
+ prp->pr_timelimit_hr.tv_nsec = 0;
+ prp->pr_flags = 0;
+ prp->pr_msgid = 0;
}
/*
@@ -1007,7 +1010,8 @@ op_set_pagedresults(Operation *op)
/*
* pagedresults_lock/unlock -- introduced to protect search results for the
- * asynchronous searches.
+ * asynchronous searches. Do not call these functions while the PR conn lock
+ * is held (e.g. pageresult_lock_get_addr(conn))
*/
void
pagedresults_lock(Connection *conn, int index)
@@ -1045,6 +1049,8 @@ int
pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index)
{
PagedResults *prp;
+ int32_t result;
+
if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) {
return 1; /* not abandoned, but do not want to proceed paged results op. */
}
@@ -1052,10 +1058,11 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde
pthread_mutex_lock(pageresult_lock_get_addr(conn));
}
prp = conn->c_pagedresults.prl_list + index;
+ result = prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
if (!locked) {
pthread_mutex_unlock(pageresult_lock_get_addr(conn));
}
- return prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
+ return result;
}
int
--
2.49.0
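The patch above fixes two things: it documents a strict acquisition order between the outer paged-results lock and the per-connection mutex, and it moves the flag read and the inner unlock so that the inner lock is always released before the outer one. A minimal Python sketch of that ordering discipline (the lock names are hypothetical stand-ins, not the actual server code):

```python
import threading

# Hypothetical stand-ins for the two locks in the patch: the per-index
# paged-results lock (outer) and the per-connection result mutex (inner).
pagedresults_lock = threading.Lock()   # outer: pagedresults_lock()/unlock()
pagedresults_mutex = threading.Lock()  # inner: pageresult_lock_get_addr(conn)

def check_abandoned_and_bail():
    """Always acquire outer before inner, and release inner before outer.

    If one thread takes outer->inner while another takes inner->outer,
    the two can deadlock; that is the hazard the patch's comment and the
    reordered unlock calls guard against. The shared flag is also read
    while the inner lock is held, then returned from a local copy.
    """
    pagedresults_lock.acquire()        # --> pagedresults_lock()
    pagedresults_mutex.acquire()       #   --> pagedresults_mutex
    try:
        abandoned = True               # read shared state under the inner lock
    finally:
        pagedresults_mutex.release()   #   <-- inner lock released first
        pagedresults_lock.release()    # <-- outer lock released last
    return abandoned
```

Copying `pr_flags` into a local before unlocking, as the patch does in `pagedresults_is_abandoned_or_notavailable`, follows the same rule: the value crosses the unlock boundary as a snapshot, never as a live read.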


@@ -0,0 +1,33 @@
From 8e341b4967212454f154cd08d7ceb2e2a429e2e8 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 11 Aug 2025 13:19:13 +0200
Subject: [PATCH] Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN
Bug Description:
The `ValueArrayRefIter` struct has a lifetime parameter `'a`.
But in the `iter` method the return type doesn't specify the lifetime parameter.
Fix Description:
Make the lifetime explicit.
Fixes: https://github.com/389ds/389-ds-base/issues/6929
---
src/slapi_r_plugin/src/value.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/slapi_r_plugin/src/value.rs b/src/slapi_r_plugin/src/value.rs
index 2fd35c808..fec74ac25 100644
--- a/src/slapi_r_plugin/src/value.rs
+++ b/src/slapi_r_plugin/src/value.rs
@@ -61,7 +61,7 @@ impl ValueArrayRef {
ValueArrayRef { raw_slapi_val }
}
- pub fn iter(&self) -> ValueArrayRefIter {
+ pub fn iter(&self) -> ValueArrayRefIter<'_> {
ValueArrayRefIter {
idx: 0,
va_ref: &self,
--
2.49.0


@@ -0,0 +1,488 @@
From 388d5ef9b64208db26373fc3b1b296a82ea689ba Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Fri, 27 Jun 2025 18:43:39 -0700
Subject: [PATCH] Issue 6822 - Backend creation cleanup and Database UI tab
error handling (#6823)
Description: Add rollback functionality when mapping tree creation fails
during backend creation to prevent orphaned backends.
Improve error handling in Database, Replication and Monitoring UI tabs
to gracefully handle backend get-tree command failures.
Fixes: https://github.com/389ds/389-ds-base/issues/6822
Reviewed by: @mreynolds389 (Thanks!)
---
src/cockpit/389-console/src/database.jsx | 119 ++++++++------
src/cockpit/389-console/src/monitor.jsx | 172 +++++++++++---------
src/cockpit/389-console/src/replication.jsx | 55 ++++---
src/lib389/lib389/backend.py | 18 +-
4 files changed, 210 insertions(+), 154 deletions(-)
diff --git a/src/cockpit/389-console/src/database.jsx b/src/cockpit/389-console/src/database.jsx
index c0c4be414..276125dfc 100644
--- a/src/cockpit/389-console/src/database.jsx
+++ b/src/cockpit/389-console/src/database.jsx
@@ -478,6 +478,59 @@ export class Database extends React.Component {
}
loadSuffixTree(fullReset) {
+ const treeData = [
+ {
+ name: _("Global Database Configuration"),
+ icon: <CogIcon />,
+ id: "dbconfig",
+ },
+ {
+ name: _("Chaining Configuration"),
+ icon: <ExternalLinkAltIcon />,
+ id: "chaining-config",
+ },
+ {
+ name: _("Backups & LDIFs"),
+ icon: <CopyIcon />,
+ id: "backups",
+ },
+ {
+ name: _("Password Policies"),
+ id: "pwp",
+ icon: <KeyIcon />,
+ children: [
+ {
+ name: _("Global Policy"),
+ icon: <HomeIcon />,
+ id: "pwpolicy",
+ },
+ {
+ name: _("Local Policies"),
+ icon: <UsersIcon />,
+ id: "localpwpolicy",
+ },
+ ],
+ defaultExpanded: true
+ },
+ {
+ name: _("Suffixes"),
+ icon: <CatalogIcon />,
+ id: "suffixes-tree",
+ children: [],
+ defaultExpanded: true,
+ action: (
+ <Button
+ onClick={this.handleShowSuffixModal}
+ variant="plain"
+ aria-label="Create new suffix"
+ title={_("Create new suffix")}
+ >
+ <PlusIcon />
+ </Button>
+ ),
+ }
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -491,58 +544,20 @@ export class Database extends React.Component {
suffixData = JSON.parse(content);
this.processTree(suffixData);
}
- const treeData = [
- {
- name: _("Global Database Configuration"),
- icon: <CogIcon />,
- id: "dbconfig",
- },
- {
- name: _("Chaining Configuration"),
- icon: <ExternalLinkAltIcon />,
- id: "chaining-config",
- },
- {
- name: _("Backups & LDIFs"),
- icon: <CopyIcon />,
- id: "backups",
- },
- {
- name: _("Password Policies"),
- id: "pwp",
- icon: <KeyIcon />,
- children: [
- {
- name: _("Global Policy"),
- icon: <HomeIcon />,
- id: "pwpolicy",
- },
- {
- name: _("Local Policies"),
- icon: <UsersIcon />,
- id: "localpwpolicy",
- },
- ],
- defaultExpanded: true
- },
- {
- name: _("Suffixes"),
- icon: <CatalogIcon />,
- id: "suffixes-tree",
- children: suffixData,
- defaultExpanded: true,
- action: (
- <Button
- onClick={this.handleShowSuffixModal}
- variant="plain"
- aria-label="Create new suffix"
- title={_("Create new suffix")}
- >
- <PlusIcon />
- </Button>
- ),
- }
- ];
+
+ let current_node = this.state.node_name;
+ if (fullReset) {
+ current_node = DB_CONFIG;
+ }
+
+ treeData[4].children = suffixData; // suffixes node
+ this.setState(() => ({
+ nodes: treeData,
+ node_name: current_node,
+ }), this.loadAttrs);
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
let current_node = this.state.node_name;
if (fullReset) {
current_node = DB_CONFIG;
diff --git a/src/cockpit/389-console/src/monitor.jsx b/src/cockpit/389-console/src/monitor.jsx
index ad48d1f87..91a8e3e37 100644
--- a/src/cockpit/389-console/src/monitor.jsx
+++ b/src/cockpit/389-console/src/monitor.jsx
@@ -200,6 +200,84 @@ export class Monitor extends React.Component {
}
loadSuffixTree(fullReset) {
+ const basicData = [
+ {
+ name: _("Server Statistics"),
+ icon: <ClusterIcon />,
+ id: "server-monitor",
+ type: "server",
+ },
+ {
+ name: _("Replication"),
+ icon: <TopologyIcon />,
+ id: "replication-monitor",
+ type: "replication",
+ defaultExpanded: true,
+ children: [
+ {
+ name: _("Synchronization Report"),
+ icon: <MonitoringIcon />,
+ id: "sync-report",
+ item: "sync-report",
+ type: "repl-mon",
+ },
+ {
+ name: _("Log Analysis"),
+ icon: <MonitoringIcon />,
+ id: "log-analysis",
+ item: "log-analysis",
+ type: "repl-mon",
+ }
+ ],
+ },
+ {
+ name: _("Database"),
+ icon: <DatabaseIcon />,
+ id: "database-monitor",
+ type: "database",
+ children: [], // Will be populated with treeData on success
+ defaultExpanded: true,
+ },
+ {
+ name: _("Logging"),
+ icon: <CatalogIcon />,
+ id: "log-monitor",
+ defaultExpanded: true,
+ children: [
+ {
+ name: _("Access Log"),
+ icon: <BookIcon size="sm" />,
+ id: "access-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Audit Log"),
+ icon: <BookIcon size="sm" />,
+ id: "audit-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Audit Failure Log"),
+ icon: <BookIcon size="sm" />,
+ id: "auditfail-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Errors Log"),
+ icon: <BookIcon size="sm" />,
+ id: "error-log-monitor",
+ type: "log",
+ },
+ {
+ name: _("Security Log"),
+ icon: <BookIcon size="sm" />,
+ id: "security-log-monitor",
+ type: "log",
+ },
+ ]
+ },
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -210,83 +288,7 @@ export class Monitor extends React.Component {
.done(content => {
const treeData = JSON.parse(content);
this.processTree(treeData);
- const basicData = [
- {
- name: _("Server Statistics"),
- icon: <ClusterIcon />,
- id: "server-monitor",
- type: "server",
- },
- {
- name: _("Replication"),
- icon: <TopologyIcon />,
- id: "replication-monitor",
- type: "replication",
- defaultExpanded: true,
- children: [
- {
- name: _("Synchronization Report"),
- icon: <MonitoringIcon />,
- id: "sync-report",
- item: "sync-report",
- type: "repl-mon",
- },
- {
- name: _("Log Analysis"),
- icon: <MonitoringIcon />,
- id: "log-analysis",
- item: "log-analysis",
- type: "repl-mon",
- }
- ],
- },
- {
- name: _("Database"),
- icon: <DatabaseIcon />,
- id: "database-monitor",
- type: "database",
- children: [],
- defaultExpanded: true,
- },
- {
- name: _("Logging"),
- icon: <CatalogIcon />,
- id: "log-monitor",
- defaultExpanded: true,
- children: [
- {
- name: _("Access Log"),
- icon: <BookIcon size="sm" />,
- id: "access-log-monitor",
- type: "log",
- },
- {
- name: _("Audit Log"),
- icon: <BookIcon size="sm" />,
- id: "audit-log-monitor",
- type: "log",
- },
- {
- name: _("Audit Failure Log"),
- icon: <BookIcon size="sm" />,
- id: "auditfail-log-monitor",
- type: "log",
- },
- {
- name: _("Errors Log"),
- icon: <BookIcon size="sm" />,
- id: "error-log-monitor",
- type: "log",
- },
- {
- name: _("Security Log"),
- icon: <BookIcon size="sm" />,
- id: "security-log-monitor",
- type: "log",
- },
- ]
- },
- ];
+
let current_node = this.state.node_name;
let type = this.state.node_type;
if (fullReset) {
@@ -296,6 +298,22 @@ export class Monitor extends React.Component {
basicData[2].children = treeData; // database node
this.processReplSuffixes(basicData[1].children);
+ this.setState(() => ({
+ nodes: basicData,
+ node_name: current_node,
+ node_type: type,
+ }), this.update_tree_nodes);
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
+ let current_node = this.state.node_name;
+ let type = this.state.node_type;
+ if (fullReset) {
+ current_node = "server-monitor";
+ type = "server";
+ }
+ this.processReplSuffixes(basicData[1].children);
+
this.setState(() => ({
nodes: basicData,
node_name: current_node,
diff --git a/src/cockpit/389-console/src/replication.jsx b/src/cockpit/389-console/src/replication.jsx
index fa492fd2a..aa535bfc7 100644
--- a/src/cockpit/389-console/src/replication.jsx
+++ b/src/cockpit/389-console/src/replication.jsx
@@ -177,6 +177,16 @@ export class Replication extends React.Component {
loaded: false
});
+ const basicData = [
+ {
+ name: _("Suffixes"),
+ icon: <TopologyIcon />,
+ id: "repl-suffixes",
+ children: [],
+ defaultExpanded: true
+ }
+ ];
+
const cmd = [
"dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket",
"backend", "get-tree",
@@ -199,15 +209,7 @@ export class Replication extends React.Component {
}
}
}
- const basicData = [
- {
- name: _("Suffixes"),
- icon: <TopologyIcon />,
- id: "repl-suffixes",
- children: [],
- defaultExpanded: true
- }
- ];
+
let current_node = this.state.node_name;
let current_type = this.state.node_type;
let replicated = this.state.node_replicated;
@@ -258,6 +260,19 @@ export class Replication extends React.Component {
}
basicData[0].children = treeData;
+ this.setState({
+ nodes: basicData,
+ node_name: current_node,
+ node_type: current_type,
+ node_replicated: replicated,
+ }, () => { this.update_tree_nodes() });
+ })
+ .fail(err => {
+ // Handle backend get-tree failure gracefully
+ let current_node = this.state.node_name;
+ let current_type = this.state.node_type;
+ let replicated = this.state.node_replicated;
+
this.setState({
nodes: basicData,
node_name: current_node,
@@ -905,18 +920,18 @@ export class Replication extends React.Component {
disableTree: false
});
});
- })
- .fail(err => {
- const errMsg = JSON.parse(err);
- this.props.addNotification(
- "error",
- cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc)
- );
- this.setState({
- suffixLoading: false,
- disableTree: false
+ })
+ .fail(err => {
+ const errMsg = JSON.parse(err);
+ this.props.addNotification(
+ "error",
+ cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc)
+ );
+ this.setState({
+ suffixLoading: false,
+ disableTree: false
+ });
});
- });
})
.fail(err => {
// changelog failure
diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index 1d000ed66..53f15b6b0 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -694,24 +694,32 @@ class Backend(DSLdapObject):
parent_suffix = properties.pop('parent', False)
# Okay, now try to make the backend.
- super(Backend, self).create(dn, properties, basedn)
+ backend_obj = super(Backend, self).create(dn, properties, basedn)
# We check if the mapping tree exists in create, so do this *after*
if create_mapping_tree is True:
- properties = {
+ mapping_tree_properties = {
'cn': self._nprops_stash['nsslapd-suffix'],
'nsslapd-state': 'backend',
'nsslapd-backend': self._nprops_stash['cn'],
}
if parent_suffix:
# This is a subsuffix, set the parent suffix
- properties['nsslapd-parent-suffix'] = parent_suffix
- self._mts.create(properties=properties)
+ mapping_tree_properties['nsslapd-parent-suffix'] = parent_suffix
+
+ try:
+ self._mts.create(properties=mapping_tree_properties)
+ except Exception as e:
+ try:
+ backend_obj.delete()
+ except Exception as cleanup_error:
+ self._instance.log.error(f"Failed to cleanup backend after mapping tree creation failure: {cleanup_error}")
+ raise e
# We can't create the sample entries unless a mapping tree was installed.
if sample_entries is not False and create_mapping_tree is True:
self.create_sample_entries(sample_entries)
- return self
+ return backend_obj
def delete(self):
"""Deletes the backend, it's mapping tree and all related indices.
--
2.49.0
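The lib389 change above wraps mapping-tree creation in a try/except and deletes the just-created backend when it fails, so no orphaned backend survives. A generic sketch of that rollback-on-failure pattern, with hypothetical callables standing in for `Backend.create()`, `MappingTrees.create()`, and `Backend.delete()`:

```python
class ResourceError(Exception):
    """Hypothetical error raised by the dependent creation step."""

def create_with_rollback(create_backend, create_mapping_tree, delete_backend):
    """Create the primary resource, then the dependent one.

    If the dependent creation fails, best-effort delete the primary and
    re-raise the original error, mirroring the patch: a cleanup failure
    is logged but must not mask the root cause.
    """
    backend = create_backend()
    try:
        create_mapping_tree(backend)
    except Exception as e:
        try:
            delete_backend(backend)            # roll back the orphan
        except Exception as cleanup_error:
            # Report and continue; the original failure is surfaced below.
            print(f"cleanup failed: {cleanup_error}")
        raise e                                # re-raise the root cause
    return backend
```

On success the function returns the created object (as the patch now returns `backend_obj` instead of `self`), which keeps the caller's handle consistent with what was actually created.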


@@ -0,0 +1,515 @@
From 9da3349d53f4073740ddb1aca97713e13cb40cd0 Mon Sep 17 00:00:00 2001
From: Lenka Doudova <lryznaro@redhat.com>
Date: Mon, 9 Jun 2025 15:15:04 +0200
Subject: [PATCH] Issue 6753 - Add 'add_exclude_subtree' and
'remove_exclude_subtree' methods to Attribute uniqueness plugin
Description:
Adding 'add_exclude_subtree' and 'remove_exclude_subtree' methods to AttributeUniquenessPlugin in
order to be able to easily add or remove an exclude subtree.
Porting ticket 47927 test to
dirsrvtests/tests/suites/plugins/attruniq_test.py
Relates: #6753
Author: Lenka Doudova
Reviewers: Simon Pichugin, Mark Reynolds
---
.../tests/suites/plugins/attruniq_test.py | 171 +++++++++++
dirsrvtests/tests/tickets/ticket47927_test.py | 267 ------------------
src/lib389/lib389/plugins.py | 10 +
3 files changed, 181 insertions(+), 267 deletions(-)
delete mode 100644 dirsrvtests/tests/tickets/ticket47927_test.py
diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py
index c1ccad9ae..aac659c29 100644
--- a/dirsrvtests/tests/suites/plugins/attruniq_test.py
+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py
@@ -10,6 +10,7 @@ import pytest
import ldap
import logging
from lib389.plugins import AttributeUniquenessPlugin
+from lib389.idm.nscontainer import nsContainers
from lib389.idm.user import UserAccounts
from lib389.idm.group import Groups
from lib389._constants import DEFAULT_SUFFIX
@@ -22,6 +23,19 @@ log = logging.getLogger(__name__)
MAIL_ATTR_VALUE = 'non-uniq@value.net'
MAIL_ATTR_VALUE_ALT = 'alt-mail@value.net'
+EXCLUDED_CONTAINER_CN = "excluded_container"
+EXCLUDED_CONTAINER_DN = "cn={},{}".format(EXCLUDED_CONTAINER_CN, DEFAULT_SUFFIX)
+
+EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container"
+EXCLUDED_BIS_CONTAINER_DN = "cn={},{}".format(EXCLUDED_BIS_CONTAINER_CN, DEFAULT_SUFFIX)
+
+ENFORCED_CONTAINER_CN = "enforced_container"
+
+USER_1_CN = "test_1"
+USER_2_CN = "test_2"
+USER_3_CN = "test_3"
+USER_4_CN = "test_4"
+
def test_modrdn_attr_uniqueness(topology_st):
"""Test that we can not add two entries that have the same attr value that is
@@ -154,3 +168,160 @@ def test_multiple_attr_uniqueness(topology_st):
testuser2.delete()
attruniq.disable()
attruniq.delete()
+
+
+def test_exclude_subtrees(topology_st):
+ """ Test attribute uniqueness with exclude scope
+
+ :id: 43d29a60-40e1-4ebd-b897-6ef9f20e9f27
+ :setup: Standalone instance
+ :steps:
+ 1. Setup and enable attribute uniqueness plugin for telephonenumber unique attribute
+ 2. Create subtrees and test users
+ 3. Add a unique attribute to a user within uniqueness scope
+ 4. Add exclude subtree
+ 5. Try to add existing value attribute to an entry within uniqueness scope
+ 6. Try to add existing value attribute to an entry within exclude scope
+ 7. Remove the attribute from affected entries
+ 8. Add a unique attribute to a user within exclude scope
+ 9. Try to add existing value attribute to an entry within uniqueness scope
+ 10. Try to add existing value attribute to another entry within uniqueness scope
+ 11. Remove the attribute from affected entries
+ 12. Add another exclude subtree
+ 13. Add a unique attribute to a user within uniqueness scope
+ 14. Try to add existing value attribute to an entry within uniqueness scope
+ 15. Try to add existing value attribute to an entry within exclude scope
+ 16. Try to add existing value attribute to an entry within another exclude scope
+ 17. Clean up entries
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Should raise CONSTRAINT_VIOLATION
+ 6. Success
+ 7. Success
+ 8. Success
+ 9. Success
+ 10. Should raise CONSTRAINT_VIOLATION
+ 11. Success
+ 12. Success
+ 13. Success
+ 14. Should raise CONSTRAINT_VIOLATION
+ 15. Success
+ 16. Success
+ 17. Success
+ """
+ log.info('Setup attribute uniqueness plugin')
+ attruniq = AttributeUniquenessPlugin(topology_st.standalone, dn="cn=attruniq,cn=plugins,cn=config")
+ attruniq.create(properties={'cn': 'attruniq'})
+ attruniq.add_unique_attribute('telephonenumber')
+ attruniq.add_unique_subtree(DEFAULT_SUFFIX)
+ attruniq.enable_all_subtrees()
+ attruniq.enable()
+ topology_st.standalone.restart()
+
+ log.info('Create subtrees container')
+ containers = nsContainers(topology_st.standalone, DEFAULT_SUFFIX)
+ cont1 = containers.create(properties={'cn': EXCLUDED_CONTAINER_CN})
+ cont2 = containers.create(properties={'cn': EXCLUDED_BIS_CONTAINER_CN})
+ cont3 = containers.create(properties={'cn': ENFORCED_CONTAINER_CN})
+
+ log.info('Create test users')
+ users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(ENFORCED_CONTAINER_CN))
+ users_excluded = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(EXCLUDED_CONTAINER_CN))
+ users_excluded2 = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX,
+ rdn='cn={}'.format(EXCLUDED_BIS_CONTAINER_CN))
+
+ user1 = users.create(properties={'cn': USER_1_CN,
+ 'uid': USER_1_CN,
+ 'sn': USER_1_CN,
+ 'uidNumber': '1',
+ 'gidNumber': '11',
+ 'homeDirectory': '/home/{}'.format(USER_1_CN)})
+ user2 = users.create(properties={'cn': USER_2_CN,
+ 'uid': USER_2_CN,
+ 'sn': USER_2_CN,
+ 'uidNumber': '2',
+ 'gidNumber': '22',
+ 'homeDirectory': '/home/{}'.format(USER_2_CN)})
+ user3 = users_excluded.create(properties={'cn': USER_3_CN,
+ 'uid': USER_3_CN,
+ 'sn': USER_3_CN,
+ 'uidNumber': '3',
+ 'gidNumber': '33',
+ 'homeDirectory': '/home/{}'.format(USER_3_CN)})
+ user4 = users_excluded2.create(properties={'cn': USER_4_CN,
+ 'uid': USER_4_CN,
+ 'sn': USER_4_CN,
+ 'uidNumber': '4',
+ 'gidNumber': '44',
+ 'homeDirectory': '/home/{}'.format(USER_4_CN)})
+
+ UNIQUE_VALUE = '1234'
+
+ try:
+ log.info('Create user with unique attribute')
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ assert user1.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Add exclude subtree')
+ attruniq.add_exclude_subtree(EXCLUDED_CONTAINER_DN)
+ topology_st.standalone.restart()
+
+ log.info('Verify an already used attribute value cannot be added within the same subtree')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify an entry with same attribute value can be added within exclude subtree')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Cleanup unique attribute values')
+ user1.remove_all('telephonenumber')
+ user3.remove_all('telephonenumber')
+
+ log.info('Add a unique value to an entry in excluded scope')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify the same value can be added to an entry within uniqueness scope')
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ assert user1.present('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify that yet another same value cannot be added to another entry within uniqueness scope')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Cleanup unique attribute values')
+ user1.remove_all('telephonenumber')
+ user3.remove_all('telephonenumber')
+
+ log.info('Add another exclude subtree')
+ attruniq.add_exclude_subtree(EXCLUDED_BIS_CONTAINER_DN)
+ topology_st.standalone.restart()
+
+ user1.add('telephonenumber', UNIQUE_VALUE)
+ log.info('Verify an already used attribute value cannot be added within the same subtree')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user2.add('telephonenumber', UNIQUE_VALUE)
+
+ log.info('Verify an already used attribute can be added to an entry in exclude scope')
+ user3.add('telephonenumber', UNIQUE_VALUE)
+ assert user3.present('telephonenumber', UNIQUE_VALUE)
+ user4.add('telephonenumber', UNIQUE_VALUE)
+ assert user4.present('telephonenumber', UNIQUE_VALUE)
+
+ finally:
+ log.info('Clean up users, containers and attribute uniqueness plugin')
+ user1.delete()
+ user2.delete()
+ user3.delete()
+ user4.delete()
+ cont1.delete()
+ cont2.delete()
+ cont3.delete()
+ attruniq.disable()
+ attruniq.delete()
\ No newline at end of file
diff --git a/dirsrvtests/tests/tickets/ticket47927_test.py b/dirsrvtests/tests/tickets/ticket47927_test.py
deleted file mode 100644
index 887fe1af4..000000000
--- a/dirsrvtests/tests/tickets/ticket47927_test.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2016 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-#
-import pytest
-from lib389.tasks import *
-from lib389.utils import *
-from lib389.topologies import topology_st
-
-from lib389._constants import SUFFIX, DEFAULT_SUFFIX, PLUGIN_ATTR_UNIQUENESS
-
-# Skip on older versions
-pytestmark = [pytest.mark.tier2,
- pytest.mark.skipif(ds_is_older('1.3.4'), reason="Not implemented")]
-
-logging.getLogger(__name__).setLevel(logging.DEBUG)
-log = logging.getLogger(__name__)
-
-EXCLUDED_CONTAINER_CN = "excluded_container"
-EXCLUDED_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_CONTAINER_CN, SUFFIX)
-
-EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container"
-EXCLUDED_BIS_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_BIS_CONTAINER_CN, SUFFIX)
-
-ENFORCED_CONTAINER_CN = "enforced_container"
-ENFORCED_CONTAINER_DN = "cn=%s,%s" % (ENFORCED_CONTAINER_CN, SUFFIX)
-
-USER_1_CN = "test_1"
-USER_1_DN = "cn=%s,%s" % (USER_1_CN, ENFORCED_CONTAINER_DN)
-USER_2_CN = "test_2"
-USER_2_DN = "cn=%s,%s" % (USER_2_CN, ENFORCED_CONTAINER_DN)
-USER_3_CN = "test_3"
-USER_3_DN = "cn=%s,%s" % (USER_3_CN, EXCLUDED_CONTAINER_DN)
-USER_4_CN = "test_4"
-USER_4_DN = "cn=%s,%s" % (USER_4_CN, EXCLUDED_BIS_CONTAINER_DN)
-
-
-def test_ticket47927_init(topology_st):
- topology_st.standalone.plugins.enable(name=PLUGIN_ATTR_UNIQUENESS)
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_REPLACE, 'uniqueness-attribute-name', b'telephonenumber'),
- (ldap.MOD_REPLACE, 'uniqueness-subtrees', ensure_bytes(DEFAULT_SUFFIX)),
- ])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927: Failed to configure plugin for "telephonenumber": error ' + e.args[0]['desc'])
- assert False
- topology_st.standalone.restart(timeout=120)
-
- topology_st.standalone.add_s(Entry((EXCLUDED_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': EXCLUDED_CONTAINER_CN})))
- topology_st.standalone.add_s(Entry((EXCLUDED_BIS_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': EXCLUDED_BIS_CONTAINER_CN})))
- topology_st.standalone.add_s(Entry((ENFORCED_CONTAINER_DN, {'objectclass': "top nscontainer".split(),
- 'cn': ENFORCED_CONTAINER_CN})))
-
- # adding an entry on a stage with a different 'cn'
- topology_st.standalone.add_s(Entry((USER_1_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_1_CN,
- 'cn': USER_1_CN})))
- # adding an entry on a stage with a different 'cn'
- topology_st.standalone.add_s(Entry((USER_2_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_2_CN,
- 'cn': USER_2_CN})))
- topology_st.standalone.add_s(Entry((USER_3_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_3_CN,
- 'cn': USER_3_CN})))
- topology_st.standalone.add_s(Entry((USER_4_DN, {
- 'objectclass': "top person".split(),
- 'sn': USER_4_CN,
- 'cn': USER_4_CN})))
-
-
-def test_ticket47927_one(topology_st):
- '''
- Check that uniqueness is enforce on all SUFFIX
- '''
- UNIQUE_VALUE = b'1234'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc']))
- assert False
-
- # we expect to fail because user1 is in the scope of the plugin
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
- pass
-
- # we expect to fail because user1 is in the scope of the plugin
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_3_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- pass
-
-
-def test_ticket47927_two(topology_st):
- '''
- Exclude the EXCLUDED_CONTAINER_DN from the uniqueness plugin
- '''
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_REPLACE, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_CONTAINER_DN))])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_two: Failed to configure plugin for to exclude %s: error %s' % (
- EXCLUDED_CONTAINER_DN, e.args[0]['desc']))
- assert False
- topology_st.standalone.restart(timeout=120)
-
-
-def test_ticket47927_three(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- First case: it exists an entry (with the same attribute value) in the scope
- of the plugin and we set the value in an entry that is in an excluded scope
- '''
- UNIQUE_VALUE = b'9876'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed to set the telephonenumber ' + e.args[0]['desc'])
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_three: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
-
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_three: success to set the telephonenumber for %s' % (USER_3_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_three: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
-
-
-def test_ticket47927_four(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- Second case: it exists an entry (with the same attribute value) in an excluded scope
- of the plugin and we set the value in an entry is in the scope
- '''
- UNIQUE_VALUE = b'1111'
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_four: success to set the telephonenumber for %s' % USER_3_DN)
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_four: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
-
- # we should be allowed to set this value (because user3 is excluded from scope)
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal(
- 'test_ticket47927_four: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc']))
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_four: unexpected success to set the telephonenumber %s' % USER_2_DN)
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_four: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
- pass
-
-
-def test_ticket47927_five(topology_st):
- '''
- Exclude the EXCLUDED_BIS_CONTAINER_DN from the uniqueness plugin
- '''
- try:
- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config',
- [(ldap.MOD_ADD, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_BIS_CONTAINER_DN))])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_five: Failed to configure plugin for to exclude %s: error %s' % (
- EXCLUDED_BIS_CONTAINER_DN, e.args[0]['desc']))
- assert False
- topology_st.standalone.restart(timeout=120)
- topology_st.standalone.getEntry('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', ldap.SCOPE_BASE)
-
-
-def test_ticket47927_six(topology_st):
- '''
- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN
- and EXCLUDED_BIS_CONTAINER_DN
- First case: it exists an entry (with the same attribute value) in the scope
- of the plugin and we set the value in an entry that is in an excluded scope
- '''
- UNIQUE_VALUE = b'222'
- try:
- topology_st.standalone.modify_s(USER_1_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed to set the telephonenumber ' + e.args[0]['desc'])
- assert False
-
- # we should not be allowed to set this value (because user1 is in the scope)
- try:
- topology_st.standalone.modify_s(USER_2_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
- assert False
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (expected) to set the telephonenumber for %s: %s' % (
- USER_2_DN, e.args[0]['desc']))
-
- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_3_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_3_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_3_DN, e.args[0]['desc']))
- assert False
- # USER_4_DN is in EXCLUDED_CONTAINER_DN so update should be successful
- try:
- topology_st.standalone.modify_s(USER_4_DN,
- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
- log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_4_DN))
- except ldap.LDAPError as e:
- log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
- USER_4_DN, e.args[0]['desc']))
- assert False
-
-
-if __name__ == '__main__':
- # Run isolated
- # -s for DEBUG mode
- CURRENT_FILE = os.path.realpath(__file__)
- pytest.main("-s %s" % CURRENT_FILE)
diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
index 31bbfa502..977091726 100644
--- a/src/lib389/lib389/plugins.py
+++ b/src/lib389/lib389/plugins.py
@@ -175,6 +175,16 @@ class AttributeUniquenessPlugin(Plugin):
self.set('uniqueness-across-all-subtrees', 'off')
+ def add_exclude_subtree(self, basedn):
+ """Add a uniqueness-exclude-subtrees attribute"""
+
+ self.add('uniqueness-exclude-subtrees', basedn)
+
+ def remove_exclude_subtree(self, basedn):
+ """Remove a uniqueness-exclude-subtrees attribute"""
+
+ self.remove('uniqueness-exclude-subtrees', basedn)
+
class AttributeUniquenessPlugins(DSLdapObjects):
"""A DSLdapObjects entity which represents Attribute Uniqueness plugin instances
--
2.49.0
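The exclude-subtree semantics the new lib389 helpers configure can be modeled as a small scoping check. This is an illustrative sketch only (naive string DN matching, hypothetical function names), not the server's actual plugin code:

```python
def dn_is_under(dn, base):
    """Return True if dn equals base or lies below it (naive string model)."""
    dn = dn.lower().replace(", ", ",")
    base = base.lower().replace(", ", ",")
    return dn == base or dn.endswith("," + base)

def in_uniqueness_scope(dn, subtrees, exclude_subtrees):
    """An entry is checked for uniqueness when it is under a configured
    uniqueness subtree and not under any exclude subtree."""
    if not any(dn_is_under(dn, s) for s in subtrees):
        return False
    return not any(dn_is_under(dn, e) for e in exclude_subtrees)

SUFFIX = "dc=example,dc=com"
EXCLUDED = "cn=excluded_container," + SUFFIX
ENFORCED = "cn=enforced_container," + SUFFIX

print(in_uniqueness_scope("cn=test_1," + ENFORCED, [SUFFIX], [EXCLUDED]))  # True
print(in_uniqueness_scope("cn=test_3," + EXCLUDED, [SUFFIX], [EXCLUDED]))  # False
```

This mirrors what the test above verifies: duplicates are rejected only between entries for which `in_uniqueness_scope` would be true.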


@@ -0,0 +1,45 @@
From 4fb3a2ea084c1de3ba60ca97a5dd14fc7b8225bd Mon Sep 17 00:00:00 2001
From: Alexander Bokovoy <abokovoy@redhat.com>
Date: Wed, 9 Jul 2025 12:08:09 +0300
Subject: [PATCH] Issue 6857 - uiduniq: allow specifying match rules in the
filter
Allow the uniqueness plugin to work with attributes where uniqueness should
be enforced using a different matching rule than the one defined for the
attribute itself.
Since the uniqueness plugin configuration can contain multiple attributes,
the matching rule is appended directly to the attribute name, using the same
syntax as an LDAP extensible match rule (e.g. 'attribute:caseIgnoreMatch:'
forces 'attribute' to be searched with a case-insensitive matching rule
instead of its original matching rule).
Fixes: https://github.com/389ds/389-ds-base/issues/6857
Signed-off-by: Alexander Bokovoy <abokovoy@redhat.com>
---
ldap/servers/plugins/uiduniq/uid.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c
index 053af4f9d..887e79d78 100644
--- a/ldap/servers/plugins/uiduniq/uid.c
+++ b/ldap/servers/plugins/uiduniq/uid.c
@@ -1030,7 +1030,14 @@ preop_add(Slapi_PBlock *pb)
}
for (i = 0; attrNames && attrNames[i]; i++) {
+ char *attr_match = strchr(attrNames[i], ':');
+ if (attr_match != NULL) {
+ attr_match[0] = '\0';
+ }
err = slapi_entry_attr_find(e, attrNames[i], &attr);
+ if (attr_match != NULL) {
+ attr_match[0] = ':';
+ }
if (!err) {
/*
* Passed all the requirements - this is an operation we
--
2.49.0
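The `attribute:matchingRule:` syntax handled by the strchr() trick above (temporarily NUL-terminating at the first colon) can be sketched in Python; the helper name here is illustrative, not part of the plugin:

```python
def split_attr_and_mr(config_value):
    """Split a uniqueness attribute config value into (attribute, matching_rule).
    'cn:CaseExactMatch:' -> ('cn', 'CaseExactMatch'); plain 'cn' -> ('cn', None)."""
    head, sep, rest = config_value.partition(':')
    if not sep:
        return config_value, None
    return head, rest.rstrip(':') or None

print(split_attr_and_mr('cn:CaseExactMatch:'))  # ('cn', 'CaseExactMatch')
print(split_attr_and_mr('telephonenumber'))     # ('telephonenumber', None)
```

The C patch achieves the same split in place so the existing attribute-name comparisons keep working, then restores the colon afterwards.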

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,65 @@
From 9d851a63c9f714ba896a90119560246bf49a433c Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 7 Jul 2025 23:11:17 +0200
Subject: [PATCH] Issue 6850 - AddressSanitizer: memory leak in mdb_init
Bug Description:
`dbmdb_componentid` can be allocated multiple times. To avoid a memory
leak, allocate it only once and free it at cleanup.
Fixes: https://github.com/389ds/389-ds-base/issues/6850
Reviewed by: @mreynolds389, @tbordaz (Thanks!)
---
ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c | 4 +++-
ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c | 2 +-
ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c | 5 +++++
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
index 447f3c70a..54ca03b0b 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c
@@ -146,7 +146,9 @@ dbmdb_compute_limits(struct ldbminfo *li)
int mdb_init(struct ldbminfo *li, config_info *config_array)
{
dbmdb_ctx_t *conf = (dbmdb_ctx_t *)slapi_ch_calloc(1, sizeof(dbmdb_ctx_t));
- dbmdb_componentid = generate_componentid(NULL, "db-mdb");
+ if (dbmdb_componentid == NULL) {
+ dbmdb_componentid = generate_componentid(NULL, "db-mdb");
+ }
li->li_dblayer_config = conf;
strncpy(conf->home, li->li_directory, MAXPATHLEN-1);
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
index c4e87987f..ed17f979f 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c
@@ -19,7 +19,7 @@
#include <prclist.h>
#include <glob.h>
-Slapi_ComponentId *dbmdb_componentid;
+Slapi_ComponentId *dbmdb_componentid = NULL;
#define BULKOP_MAX_RECORDS 100 /* Max records handled by a single bulk operations */
diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
index 2d07db9b5..ae10ac7cf 100644
--- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c
@@ -49,6 +49,11 @@ dbmdb_cleanup(struct ldbminfo *li)
}
slapi_ch_free((void **)&(li->li_dblayer_config));
+ if (dbmdb_componentid != NULL) {
+ release_componentid(dbmdb_componentid);
+ dbmdb_componentid = NULL;
+ }
+
return 0;
}
--
2.49.0


@@ -0,0 +1,58 @@
From 510e0e9b35d94714048a06bc5067d43704f55503 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 7 Jul 2025 22:01:09 +0200
Subject: [PATCH] Issue 6848 - AddressSanitizer: leak in do_search
Bug Description:
When there's a BER decoding error and the function goes to
`free_and_return`, the `attrs` variable is not being freed because it's
only freed if `!psearch || rc != 0 || err != 0`, but `err` is still 0 at
that point.
If we reach `free_and_return` from the `ber_scanf` error path, `attrs`
was never set in the pblock with `slapi_pblock_set()`, so the
`slapi_pblock_get()` call will not retrieve the potentially partially
allocated `attrs` from the BER decoding.
Fixes: https://github.com/389ds/389-ds-base/issues/6848
Reviewed by: @tbordaz, @droideck (Thanks!)
---
ldap/servers/slapd/search.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/ldap/servers/slapd/search.c b/ldap/servers/slapd/search.c
index e9b2c3670..f9d03c090 100644
--- a/ldap/servers/slapd/search.c
+++ b/ldap/servers/slapd/search.c
@@ -235,6 +235,7 @@ do_search(Slapi_PBlock *pb)
log_search_access(pb, base, scope, fstr, "decoding error");
send_ldap_result(pb, LDAP_PROTOCOL_ERROR, NULL, NULL, 0,
NULL);
+ err = 1; /* Make sure we free everything */
goto free_and_return;
}
@@ -420,8 +421,17 @@ free_and_return:
if (!psearch || rc != 0 || err != 0) {
slapi_ch_free_string(&fstr);
slapi_filter_free(filter, 1);
- slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs);
- charray_free(attrs); /* passing NULL is fine */
+
+ /* Get attrs from pblock if it was set there, otherwise use local attrs */
+ char **pblock_attrs = NULL;
+ slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &pblock_attrs);
+ if (pblock_attrs != NULL) {
+ charray_free(pblock_attrs); /* Free attrs from pblock */
+ slapi_pblock_set(pb, SLAPI_SEARCH_ATTRS, NULL);
+ } else if (attrs != NULL) {
+ /* Free attrs that were allocated but never put in pblock */
+ charray_free(attrs);
+ }
charray_free(gerattrs); /* passing NULL is fine */
/*
* Fix for defect 526719 / 553356 : Persistent search op failed.
--
2.49.0
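The resulting ownership rule, modeled in Python: free whichever copy of `attrs` is authoritative, never both. The dict stands in for the Slapi pblock; names are illustrative:

```python
def cleanup_attrs(pblock, local_attrs, freed):
    """Free attrs exactly once: prefer the pblock copy if the parse got far
    enough to store it there, otherwise free the partially built local copy."""
    pblock_attrs = pblock.get("SLAPI_SEARCH_ATTRS")
    if pblock_attrs is not None:
        freed.append(pblock_attrs)             # charray_free() on the pblock copy
        pblock["SLAPI_SEARCH_ATTRS"] = None    # avoid a later double free
    elif local_attrs is not None:
        freed.append(local_attrs)              # early ber_scanf error path

freed = []
cleanup_attrs({"SLAPI_SEARCH_ATTRS": None}, ["cn"], freed)    # decoding-error path
cleanup_attrs({"SLAPI_SEARCH_ATTRS": ["sn"]}, ["sn"], freed)  # normal path
print(freed)  # [['cn'], ['sn']]
```

Setting `err = 1` on the decoding-error path simply forces the code into this cleanup branch.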


@@ -0,0 +1,58 @@
From 7b3cd3147a8d3c41327768689962730d8fa28797 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Fri, 11 Jul 2025 12:32:38 +0200
Subject: [PATCH] Issue 6865 - AddressSanitizer: leak in
agmt_update_init_status
Bug Description:
We allocate an array of `LDAPMod *` pointers, but never free it:
```
=================================================================
==2748356==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f05e8cb4a07 in __interceptor_malloc (/lib64/libasan.so.6+0xb4a07)
#1 0x7f05e85c0138 in slapi_ch_malloc (/usr/lib64/dirsrv/libslapd.so.0+0x1c0138)
#2 0x7f05e109e481 in agmt_update_init_status ldap/servers/plugins/replication/repl5_agmt.c:2583
#3 0x7f05e10a0aa5 in agmtlist_shutdown ldap/servers/plugins/replication/repl5_agmtlist.c:789
#4 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:844
#5 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:837
#6 0x7f05e862507d in plugin_call_func ldap/servers/slapd/plugin.c:2001
#7 0x7f05e8625be1 in plugin_call_one ldap/servers/slapd/plugin.c:1950
#8 0x7f05e8625be1 in plugin_dependency_closeall ldap/servers/slapd/plugin.c:1844
#9 0x55e1a7ff9815 in slapd_daemon ldap/servers/slapd/daemon.c:1275
#10 0x55e1a7fd36ef in main (/usr/sbin/ns-slapd+0x3e6ef)
#11 0x7f05e80295cf in __libc_start_call_main (/lib64/libc.so.6+0x295cf)
#12 0x7f05e802967f in __libc_start_main_alias_2 (/lib64/libc.so.6+0x2967f)
#13 0x55e1a7fd74a4 in _start (/usr/sbin/ns-slapd+0x424a4)
SUMMARY: AddressSanitizer: 24 byte(s) leaked in 1 allocation(s).
```
Fix Description:
Ensure `mods` is freed in the cleanup code.
Fixes: https://github.com/389ds/389-ds-base/issues/6865
Relates: https://github.com/389ds/389-ds-base/issues/6470
Reviewed by: @mreynolds389 (Thanks!)
---
ldap/servers/plugins/replication/repl5_agmt.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c
index c818c5857..0a81167b7 100644
--- a/ldap/servers/plugins/replication/repl5_agmt.c
+++ b/ldap/servers/plugins/replication/repl5_agmt.c
@@ -2743,6 +2743,7 @@ agmt_update_init_status(Repl_Agmt *ra)
} else {
PR_Unlock(ra->lock);
}
+ slapi_ch_free((void **)&mods);
slapi_mod_done(&smod_start_time);
slapi_mod_done(&smod_end_time);
slapi_mod_done(&smod_status);
--
2.49.0


@@ -0,0 +1,55 @@
From 81af69f415ffdf48861de00ba9a60614c0a02a87 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Fri, 11 Jul 2025 13:49:25 -0400
Subject: [PATCH] Issue 6868 - UI - schema attribute table expansion breaks
after moving to a new page
Description:
Used the wrong formula to select the expanded row for Attributes
Relates: https://github.com/389ds/389-ds-base/issues/6868
Reviewed by: spichugi(Thanks!)
---
src/cockpit/389-console/src/lib/database/databaseConfig.jsx | 1 -
src/cockpit/389-console/src/lib/schema/schemaTables.jsx | 4 ++--
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
index adb8227d7..7a1ce3bc2 100644
--- a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
+++ b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx
@@ -8,7 +8,6 @@ import {
Form,
Grid,
GridItem,
- Hr,
NumberInput,
Spinner,
Switch,
diff --git a/src/cockpit/389-console/src/lib/schema/schemaTables.jsx b/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
index 609d4af15..446931ac2 100644
--- a/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
+++ b/src/cockpit/389-console/src/lib/schema/schemaTables.jsx
@@ -465,7 +465,7 @@ class AttributesTable extends React.Component {
handleCollapse(event, rowKey, isOpen) {
const { rows, perPage, page } = this.state;
- const index = (perPage * (page - 1) * 2) + rowKey; // Adjust for page set
+ const index = (perPage * (page - 1)) + rowKey; // Adjust for page set
rows[index].isOpen = isOpen;
this.setState({
rows
@@ -525,7 +525,7 @@ class AttributesTable extends React.Component {
];
render() {
- const { perPage, page, sortBy, rows, noRows, columns } = this.state;
+ const { perPage, page, sortBy, rows, columns } = this.state;
const startIdx = (perPage * page) - perPage;
const tableRows = rows.slice(startIdx, startIdx + perPage);
--
2.49.0
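The corrected row-index arithmetic can be checked directly; this is a sketch of the formula only, not the React component:

```python
def expanded_row_index(per_page, page, row_key):
    """Map a row key on the current page back to its index in the full rows list."""
    return per_page * (page - 1) + row_key

def buggy_index(per_page, page, row_key):
    """The old formula doubled the page offset, pointing past the intended row."""
    return per_page * (page - 1) * 2 + row_key

print(expanded_row_index(10, 2, 3))  # 13
print(buggy_index(10, 2, 3))         # 23
```

On page 1 both formulas agree (offset is zero), which is why the expansion only broke after moving to a new page.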


@@ -0,0 +1,169 @@
From e4bd0eb2a4ad612efbf7824da022dd5403c71684 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 9 Jul 2025 14:18:50 -0400
Subject: [PATCH] Issue 6859 - str2filter is not fully applying matching rules
Description:
When we have an extended filter, one with an MR applied, it is ignored during
internal searches:
"(cn:CaseExactMatch:=Value)"
For internal searches we use str2filter(), which doesn't fully apply extended
search filter matching rules.
The attr uniqueness plugin also needed updating to apply this change for mod
operations (previously only adds correctly handled these attribute
filters).
Relates: https://github.com/389ds/389-ds-base/issues/6857
Relates: https://github.com/389ds/389-ds-base/issues/6859
Reviewed by: spichugi & tbordaz(Thanks!!)
---
.../tests/suites/plugins/attruniq_test.py | 65 ++++++++++++++++++-
ldap/servers/plugins/uiduniq/uid.c | 7 ++
ldap/servers/slapd/plugin_mr.c | 2 +-
ldap/servers/slapd/str2filter.c | 8 +++
4 files changed, 79 insertions(+), 3 deletions(-)
diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py
index aac659c29..046952df3 100644
--- a/dirsrvtests/tests/suites/plugins/attruniq_test.py
+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2021 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -324,4 +324,65 @@ def test_exclude_subtrees(topology_st):
cont2.delete()
cont3.delete()
attruniq.disable()
- attruniq.delete()
\ No newline at end of file
+ attruniq.delete()
+
+
+def test_matchingrule_attr(topology_st):
+ """ Test list extension MR attribute. Check for "cn" using CES (versus it
+ being defined as CIS)
+
+ :id: 5cde4342-6fa3-4225-b23d-0af918981075
+ :setup: Standalone instance
+ :steps:
+ 1. Setup and enable attribute uniqueness plugin to use CN attribute
+ with a matching rule of CaseExactMatch.
+ 2. Add user with CN value is lowercase
+ 3. Add second user with same lowercase CN which should be rejected
+ 4. Add second user with same CN value but with mixed case
+ 5. Modify second user replacing CN value to lc which should be rejected
+
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Success
+ 5. Success
+ """
+
+ inst = topology_st.standalone
+
+ attruniq = AttributeUniquenessPlugin(inst,
+ dn="cn=attribute uniqueness,cn=plugins,cn=config")
+ attruniq.add_unique_attribute('cn:CaseExactMatch:')
+ attruniq.enable_all_subtrees()
+ attruniq.enable()
+ inst.restart()
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ users.create(properties={'cn': "common_name",
+ 'uid': "uid_name",
+ 'sn': "uid_name",
+ 'uidNumber': '1',
+ 'gidNumber': '11',
+ 'homeDirectory': '/home/uid_name'})
+
+ log.info('Add entry with the exact CN value which should be rejected')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ users.create(properties={'cn': "common_name",
+ 'uid': "uid_name2",
+ 'sn': "uid_name2",
+ 'uidNumber': '11',
+ 'gidNumber': '111',
+ 'homeDirectory': '/home/uid_name2'})
+
+ log.info('Add entry with the mixed case CN value which should be allowed')
+ user = users.create(properties={'cn': "Common_Name",
+ 'uid': "uid_name2",
+ 'sn': "uid_name2",
+ 'uidNumber': '11',
+ 'gidNumber': '111',
+ 'homeDirectory': '/home/uid_name2'})
+
+ log.info('Mod entry with exact case CN value which should be rejected')
+ with pytest.raises(ldap.CONSTRAINT_VIOLATION):
+ user.replace('cn', 'common_name')
diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c
index 887e79d78..fdb1404a0 100644
--- a/ldap/servers/plugins/uiduniq/uid.c
+++ b/ldap/servers/plugins/uiduniq/uid.c
@@ -1178,6 +1178,10 @@ preop_modify(Slapi_PBlock *pb)
for (; mods && *mods; mods++) {
mod = *mods;
for (i = 0; attrNames && attrNames[i]; i++) {
+ char *attr_match = strchr(attrNames[i], ':');
+ if (attr_match != NULL) {
+ attr_match[0] = '\0';
+ }
if ((slapi_attr_type_cmp(mod->mod_type, attrNames[i], 1) == 0) && /* mod contains target attr */
(mod->mod_op & LDAP_MOD_BVALUES) && /* mod is bval encoded (not string val) */
(mod->mod_bvalues && mod->mod_bvalues[0]) && /* mod actually contains some values */
@@ -1186,6 +1190,9 @@ preop_modify(Slapi_PBlock *pb)
{
addMod(&checkmods, &checkmodsCapacity, &modcount, mod);
}
+ if (attr_match != NULL) {
+ attr_match[0] = ':';
+ }
}
}
if (modcount == 0) {
diff --git a/ldap/servers/slapd/plugin_mr.c b/ldap/servers/slapd/plugin_mr.c
index 9809a4374..757355dbc 100644
--- a/ldap/servers/slapd/plugin_mr.c
+++ b/ldap/servers/slapd/plugin_mr.c
@@ -625,7 +625,7 @@ attempt_mr_filter_create(mr_filter_t *f, struct slapdplugin *mrp, Slapi_PBlock *
int rc;
int32_t (*mrf_create)(Slapi_PBlock *) = NULL;
f->mrf_match = NULL;
- pblock_init(pb);
+ slapi_pblock_init(pb);
if (!(rc = slapi_pblock_set(pb, SLAPI_PLUGIN, mrp)) &&
!(rc = slapi_pblock_get(pb, SLAPI_PLUGIN_MR_FILTER_CREATE_FN, &mrf_create)) &&
mrf_create != NULL &&
diff --git a/ldap/servers/slapd/str2filter.c b/ldap/servers/slapd/str2filter.c
index 9fdc500f7..5620b7439 100644
--- a/ldap/servers/slapd/str2filter.c
+++ b/ldap/servers/slapd/str2filter.c
@@ -344,6 +344,14 @@ str2simple(char *str, int unescape_filter)
return NULL; /* error */
} else {
f->f_choice = LDAP_FILTER_EXTENDED;
+ if (f->f_mr_oid) {
+ /* apply the MR indexers */
+ rc = plugin_mr_filter_create(&f->f_mr);
+ if (rc) {
+ slapi_filter_free(f, 1);
+ return NULL; /* error */
+ }
+ }
}
} else if (str_find_star(value) == NULL) {
f->f_choice = LDAP_FILTER_EQUALITY;
--
2.49.0
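The uiduniq fix above temporarily NUL-terminates the configured attribute name at `:` so that `slapi_attr_type_cmp()` compares only the base attribute type, then restores the `:` afterwards. A minimal Python sketch of the same idea (function names are illustrative, not part of the plugin):

```python
def base_attr_type(attr_desc: str) -> str:
    """Strip an optional ':matchingrule' suffix from a configured
    attribute name, e.g. 'uid:caseIgnoreMatch' -> 'uid'."""
    return attr_desc.split(':', 1)[0]

def mod_targets_attr(mod_type: str, attr_desc: str) -> bool:
    # Compare only the base attribute type, case-insensitively,
    # mirroring slapi_attr_type_cmp(..., 1) in the C code.
    return mod_type.lower() == base_attr_type(attr_desc).lower()
```

Unlike the C code, which must restore the `:` in place because the configuration string is reused, the sketch can simply work on a copy.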

@@ -0,0 +1,163 @@
From 48e7696fbebc14220b4b9a831c4a170003586152 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Tue, 15 Jul 2025 17:56:18 -0400
Subject: [PATCH] Issue 6872 - compressed log rotation creates files with world
readable permission
Description:
When compressing a log file, first create the empty file using open()
so the correct permissions are set right from the start. gzopen()
always creates files with permission 644, which is not safe. After
creating the file with open() and the correct permissions, pass the FD
to gzdopen() and write the compressed content.
relates: https://github.com/389ds/389-ds-base/issues/6872
Reviewed by: progier(Thanks!)
---
.../logging/logging_compression_test.py | 15 ++++++++--
ldap/servers/slapd/log.c | 28 +++++++++++++------
ldap/servers/slapd/schema.c | 2 +-
3 files changed, 33 insertions(+), 12 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/logging_compression_test.py b/dirsrvtests/tests/suites/logging/logging_compression_test.py
index e30874cc0..3a987d62c 100644
--- a/dirsrvtests/tests/suites/logging/logging_compression_test.py
+++ b/dirsrvtests/tests/suites/logging/logging_compression_test.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2022 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -22,12 +22,21 @@ log = logging.getLogger(__name__)
pytestmark = pytest.mark.tier1
+
def log_rotated_count(log_type, log_dir, check_compressed=False):
- # Check if the log was rotated
+ """
+ Check if the log was rotated and has the correct permissions
+ """
log_file = f'{log_dir}/{log_type}.2*'
if check_compressed:
log_file += ".gz"
- return len(glob.glob(log_file))
+ log_files = glob.glob(log_file)
+ for logf in log_files:
+ # Check permissions
+ st = os.stat(logf)
+ assert oct(st.st_mode) == '0o100600' # 0600
+
+ return len(log_files)
def update_and_sleep(inst, suffix, sleep=True):
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 06dae4d0b..eab837166 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -174,17 +174,28 @@ get_syslog_loglevel(int loglevel)
}
static int
-compress_log_file(char *log_name)
+compress_log_file(char *log_name, int32_t mode)
{
char gzip_log[BUFSIZ] = {0};
char buf[LOG_CHUNK] = {0};
size_t bytes_read = 0;
gzFile outfile = NULL;
FILE *source = NULL;
+ int fd = 0;
PR_snprintf(gzip_log, sizeof(gzip_log), "%s.gz", log_name);
- if ((outfile = gzopen(gzip_log,"wb")) == NULL) {
- /* Failed to open new gzip file */
+
+ /*
+ * Try to open the file as we may have an incorrect path. We also need to
+ * set the permissions using open() as gzopen() creates the file with
+ * 644 permissions (world readable - bad). So we create an empty file with
+ * the correct permissions, then we pass the FD to gzdopen() to write the
+ * compressed content.
+ */
+ if ((fd = open(gzip_log, O_WRONLY|O_CREAT|O_TRUNC, mode)) >= 0) {
+ /* File successfully created, now pass the FD to gzdopen() */
+ outfile = gzdopen(fd, "ab");
+ } else {
return -1;
}
@@ -193,6 +204,7 @@ compress_log_file(char *log_name)
gzclose(outfile);
return -1;
}
+
bytes_read = fread(buf, 1, LOG_CHUNK, source);
while (bytes_read > 0) {
int bytes_written = gzwrite(outfile, buf, bytes_read);
@@ -3402,7 +3414,7 @@ log__open_accesslogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_access_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_access_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated access log (%s)\n",
newfile);
@@ -3570,7 +3582,7 @@ log__open_securitylogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_security_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_security_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_securitylogfile",
"failed to compress rotated security audit log (%s)\n",
newfile);
@@ -6288,7 +6300,7 @@ log__open_errorlogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_error_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_error_mode) != 0) {
PR_snprintf(buffer, sizeof(buffer), "Failed to compress errors log file (%s)\n", newfile);
log__error_emergency(buffer, 1, 1);
} else {
@@ -6476,7 +6488,7 @@ log__open_auditlogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_audit_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_audit_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated audit log (%s)\n",
newfile);
@@ -6641,7 +6653,7 @@ log__open_auditfaillogfile(int logfile_state, int locked)
return LOG_UNABLE_TO_OPENFILE;
}
} else if (loginfo.log_auditfail_compress) {
- if (compress_log_file(newfile) != 0) {
+ if (compress_log_file(newfile, loginfo.log_auditfail_mode) != 0) {
slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile",
"failed to compress rotated auditfail log (%s)\n",
newfile);
diff --git a/ldap/servers/slapd/schema.c b/ldap/servers/slapd/schema.c
index a8e6b1210..9ef4ee4bf 100644
--- a/ldap/servers/slapd/schema.c
+++ b/ldap/servers/slapd/schema.c
@@ -903,7 +903,7 @@ oc_check_allowed_sv(Slapi_PBlock *pb, Slapi_Entry *e, const char *type, struct o
if (pb) {
PR_snprintf(errtext, sizeof(errtext),
- "attribute \"%s\" not allowed\n",
+ "attribute \"%s\" not allowed",
escape_string(type, ebuf));
slapi_pblock_set(pb, SLAPI_PB_RESULT_TEXT, errtext);
}
--
2.49.0
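The `open()` + `gzdopen()` technique from this patch translates directly to Python: create the empty file with the restrictive mode first, then layer the gzip writer on top of the already-open descriptor. A sketch (the helper name mirrors the C function but is otherwise hypothetical; like the C code, the mode is still subject to the process umask):

```python
import gzip
import os

def compress_log_file(log_name: str, mode: int = 0o600) -> None:
    """Create <log_name>.gz with restrictive permissions from the start,
    then stream the original file into it in chunks."""
    gz_path = log_name + ".gz"
    # Create the empty file with the desired mode first; opening the
    # gzip target by name would use the library's default permissions.
    fd = os.open(gz_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, mode)
    with os.fdopen(fd, "wb") as raw, \
         gzip.GzipFile(fileobj=raw, mode="wb") as gz, \
         open(log_name, "rb") as src:
        while chunk := src.read(64 * 1024):
            gz.write(chunk)
```

There is no window in which the file exists world-readable, because the permissions are set atomically at creation rather than fixed up after the fact.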

@@ -0,0 +1,590 @@
From a8fe12fcfbe0f81935972c3eddae638a281551d1 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 16 Jul 2025 20:54:48 -0400
Subject: [PATCH] Issue 6888 - Missing access JSON logging for TLS/Client auth
Description:
TLS/Client auth logging was not converted to JSON (auth.c got missed)
Relates: https://github.com/389ds/389-ds-base/issues/6888
Reviewed by: spichugi(Thanks!)
---
.../logging/access_json_logging_test.py | 96 ++++++++-
ldap/servers/slapd/accesslog.c | 114 +++++++++++
ldap/servers/slapd/auth.c | 182 +++++++++++++-----
ldap/servers/slapd/log.c | 2 +
ldap/servers/slapd/slapi-private.h | 10 +
5 files changed, 353 insertions(+), 51 deletions(-)
diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
index ae91dc487..f0dc861a7 100644
--- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py
+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
@@ -19,6 +19,8 @@ from lib389.idm.user import UserAccounts
from lib389.dirsrv_log import DirsrvAccessJSONLog
from lib389.index import VLVSearch, VLVIndex
from lib389.tasks import Tasks
+from lib389.config import CertmapLegacy
+from lib389.nss_ssl import NssSsl
from ldap.controls.vlv import VLVRequestControl
from ldap.controls.sss import SSSRequestControl
from ldap.controls import SimplePagedResultsControl
@@ -67,11 +69,11 @@ def get_log_event(inst, op, key=None, val=None, key2=None, val2=None):
if val == str(event[key]).lower() and \
val2 == str(event[key2]).lower():
return event
-
- elif key is not None and key in event:
- val = str(val).lower()
- if val == str(event[key]).lower():
- return event
+ elif key is not None:
+ if key in event:
+ val = str(val).lower()
+ if val == str(event[key]).lower():
+ return event
else:
return event
@@ -163,6 +165,7 @@ def test_access_json_format(topo_m2, setup_test):
14. Test PAGED SEARCH is logged correctly
15. Test PERSISTENT SEARCH is logged correctly
16. Test EXTENDED OP
+ 17. Test TLS_INFO is logged correctly
:expectedresults:
1. Success
2. Success
@@ -180,6 +183,7 @@ def test_access_json_format(topo_m2, setup_test):
14. Success
15. Success
16. Success
+ 17. Success
"""
inst = topo_m2.ms["supplier1"]
@@ -560,6 +564,88 @@ def test_access_json_format(topo_m2, setup_test):
assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
assert event['name'] == "replication-multisupplier-extop"
+ #
+ # TLS INFO/TLS CLIENT INFO
+ #
+ RDN_TEST_USER = 'testuser'
+ RDN_TEST_USER_WRONG = 'testuser_wrong'
+ inst.enable_tls()
+ inst.restart()
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = users.create(properties={
+ 'uid': RDN_TEST_USER,
+ 'cn': RDN_TEST_USER,
+ 'sn': RDN_TEST_USER,
+ 'uidNumber': '1000',
+ 'gidNumber': '2000',
+ 'homeDirectory': f'/home/{RDN_TEST_USER}'
+ })
+
+ ssca_dir = inst.get_ssca_dir()
+ ssca = NssSsl(dbpath=ssca_dir)
+ ssca.create_rsa_user(RDN_TEST_USER)
+ ssca.create_rsa_user(RDN_TEST_USER_WRONG)
+
+ # Get the details of where the key and crt are.
+ tls_locs = ssca.get_rsa_user(RDN_TEST_USER)
+ tls_locs_wrong = ssca.get_rsa_user(RDN_TEST_USER_WRONG)
+
+ user.enroll_certificate(tls_locs['crt_der_path'])
+
+ # Turn on the certmap.
+ cm = CertmapLegacy(inst)
+ certmaps = cm.list()
+ certmaps['default']['DNComps'] = ''
+ certmaps['default']['FilterComps'] = ['cn']
+ certmaps['default']['VerifyCert'] = 'off'
+ cm.set(certmaps)
+
+ # Check that EXTERNAL is listed in supported mechns.
+ assert (inst.rootdse.supports_sasl_external())
+
+ # Restart to allow certmaps to be re-read: Note, we CAN NOT use post_open
+ # here, it breaks on auth. see lib389/__init__.py
+ inst.restart(post_open=False)
+
+ # Attempt a bind with TLS external
+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir,
+ userkey=tls_locs['key'], usercert=tls_locs['crt'])
+ inst.restart()
+
+ event = get_log_event(inst, "TLS_INFO")
+ assert event is not None
+ assert 'tls_version' in event
+ assert 'keysize' in event
+ assert 'cipher' in event
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO",
+ "subject",
+ "CN=testuser,O=testing,L=389ds,ST=Queensland,C=AU")
+ assert event is not None
+ assert 'tls_version' in event
+ assert 'keysize' in event
+ assert 'issuer' in event
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO",
+ "client_dn",
+ "uid=testuser,ou=People,dc=example,dc=com")
+ assert event is not None
+ assert 'tls_version' in event
+ assert event['msg'] == "client bound"
+
+ # Check for failed certmap error
+ with pytest.raises(ldap.INVALID_CREDENTIALS):
+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir,
+ userkey=tls_locs_wrong['key'],
+ usercert=tls_locs_wrong['crt'])
+
+ event = get_log_event(inst, "TLS_CLIENT_INFO", "err", -185)
+ assert event is not None
+ assert 'tls_version' in event
+ assert event['msg'] == "failed to map client certificate to LDAP DN"
+ assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry"
+
if __name__ == '__main__':
# Run isolated
diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c
index 68022fe38..072ace203 100644
--- a/ldap/servers/slapd/accesslog.c
+++ b/ldap/servers/slapd/accesslog.c
@@ -1147,3 +1147,117 @@ slapd_log_access_sort(slapd_log_pblock *logpb)
return rc;
}
+
+/*
+ * TLS connection
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * const char *msg
+ * const char *tls_version
+ * int32_t keysize
+ * const char *cipher
+ * int32_t err
+ * const char *err_str
+ */
+int32_t
+slapd_log_access_tls(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "TLS_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ if (logpb->tls_version) {
+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version));
+ }
+ if (logpb->cipher) {
+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher));
+ }
+ if (logpb->keysize) {
+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize));
+ }
+ if (logpb->err_str) {
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str));
+ }
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
+
+/*
+ * TLS client auth
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * const char* tls_version
+ * const char* keysize
+ * const char* cipher
+ * const char* msg
+ * const char* subject
+ * const char* issuer
+ * int32_t err
+ * const char* err_str
+ * const char *client_dn
+ */
+int32_t
+slapd_log_access_tls_client_auth(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "TLS_CLIENT_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->tls_version) {
+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version));
+ }
+ if (logpb->cipher) {
+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher));
+ }
+ if (logpb->keysize) {
+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize));
+ }
+ if (logpb->subject) {
+ json_object_object_add(json_obj, "subject", json_obj_add_str(logpb->subject));
+ }
+ if (logpb->issuer) {
+ json_object_object_add(json_obj, "issuer", json_obj_add_str(logpb->issuer));
+ }
+ if (logpb->client_dn) {
+ json_object_object_add(json_obj, "client_dn", json_obj_add_str(logpb->client_dn));
+ }
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ if (logpb->err_str) {
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str));
+ }
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
diff --git a/ldap/servers/slapd/auth.c b/ldap/servers/slapd/auth.c
index e4231bf45..48e4b7129 100644
--- a/ldap/servers/slapd/auth.c
+++ b/ldap/servers/slapd/auth.c
@@ -1,6 +1,6 @@
/** BEGIN COPYRIGHT BLOCK
* Copyright (C) 2001 Sun Microsystems, Inc. Used by permission.
- * Copyright (C) 2005 Red Hat, Inc.
+ * Copyright (C) 2025 Red Hat, Inc.
* All rights reserved.
*
* License: GPL (version 3 or any later version).
@@ -363,19 +363,32 @@ handle_bad_certificate(void *clientData, PRFileDesc *prfd)
char sbuf[BUFSIZ], ibuf[BUFSIZ];
Connection *conn = (Connection *)clientData;
CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd);
-
PRErrorCode errorCode = PR_GetError();
char *subject = subject_of(clientCert);
char *issuer = issuer_of(clientCert);
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode),
- subject ? escape_string(subject, sbuf) : "NULL",
- issuer ? escape_string(issuer, ibuf) : "NULL");
+ int32_t log_format = config_get_accesslog_log_format();
+ slapd_log_pblock logpb = {0};
+
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "unauthenticated client";
+ logpb.subject = subject ? escape_string(subject, sbuf) : "NULL";
+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL";
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode),
+ subject ? escape_string(subject, sbuf) : "NULL",
+ issuer ? escape_string(issuer, ibuf) : "NULL");
+ }
if (issuer)
- free(issuer);
+ slapi_ch_free_string(&issuer);
if (subject)
- free(subject);
+ slapi_ch_free_string(&subject);
if (clientCert)
CERT_DestroyCertificate(clientCert);
return -1; /* non-zero means reject this certificate */
@@ -394,7 +407,8 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
{
Connection *conn = (Connection *)clientData;
CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd);
-
+ int32_t log_format = config_get_accesslog_log_format();
+ slapd_log_pblock logpb = {0};
char *clientDN = NULL;
int keySize = 0;
char *cipher = NULL;
@@ -403,19 +417,39 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
SSLCipherSuiteInfo cipherInfo;
char *subject = NULL;
char sslversion[64];
+ int err = 0;
if ((slapd_ssl_getChannelInfo(prfd, &channelInfo, sizeof(channelInfo))) != SECSuccess) {
PRErrorCode errorCode = PR_GetError();
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ logpb.msg = "SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR;
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ }
goto done;
}
+
if ((slapd_ssl_getCipherSuiteInfo(channelInfo.cipherSuite, &cipherInfo, sizeof(cipherInfo))) != SECSuccess) {
PRErrorCode errorCode = PR_GetError();
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
- conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.err = errorCode;
+ logpb.err_str = slapd_pr_strerror(errorCode);
+ logpb.msg = "SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR;
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n",
+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode));
+ }
goto done;
}
@@ -434,47 +468,84 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
if (config_get_SSLclientAuth() == SLAPD_SSLCLIENTAUTH_OFF) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
goto done;
}
if (clientCert == NULL) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
} else {
subject = subject_of(clientCert);
if (!subject) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s %i-bit %s; missing subject\n",
- conn->c_connid,
- sslversion, keySize, cipher ? cipher : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "missing subject";
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s %i-bit %s; missing subject\n",
+ conn->c_connid,
+ sslversion, keySize, cipher ? cipher : "NULL");
+ }
goto done;
- }
- {
+ } else {
char *issuer = issuer_of(clientCert);
char sbuf[BUFSIZ], ibuf[BUFSIZ];
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n",
- conn->c_connid,
- sslversion, keySize,
- cipher ? cipher : "NULL",
- escape_string(subject, sbuf),
- issuer ? escape_string(issuer, ibuf) : "NULL");
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.tls_version = sslversion;
+ logpb.keysize = keySize;
+ logpb.cipher = cipher ? cipher : "NULL";
+ logpb.subject = escape_string(subject, sbuf);
+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL";
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n",
+ conn->c_connid,
+ sslversion, keySize,
+ cipher ? cipher : "NULL",
+ escape_string(subject, sbuf),
+ issuer ? escape_string(issuer, ibuf) : "NULL");
+ }
if (issuer)
- free(issuer);
+ slapi_ch_free_string(&issuer);
}
slapi_dn_normalize(subject);
{
LDAPMessage *chain = NULL;
char *basedn = config_get_basedn();
- int err;
err = ldapu_cert_to_ldap_entry(clientCert, internal_ld, basedn ? basedn : "" /*baseDN*/, &chain);
if (err == LDAPU_SUCCESS && chain) {
@@ -505,18 +576,37 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData)
slapi_sdn_free(&sdn);
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s client bound as %s\n",
- conn->c_connid,
- sslversion, clientDN);
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "client bound";
+ logpb.tls_version = sslversion;
+ logpb.client_dn = clientDN;
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s client bound as %s\n",
+ conn->c_connid,
+ sslversion, clientDN);
+ }
} else if (clientCert != NULL) {
(void)slapi_getSSLVersion_str(channelInfo.protocolVersion,
sslversion, sizeof(sslversion));
- slapi_log_access(LDAP_DEBUG_STATS,
- "conn=%" PRIu64 " %s failed to map client "
- "certificate to LDAP DN (%s)\n",
- conn->c_connid,
- sslversion, extraErrorMsg);
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ slapd_log_pblock_init(&logpb, log_format, NULL);
+ logpb.conn_id = conn->c_connid;
+ logpb.msg = "failed to map client certificate to LDAP DN";
+ logpb.tls_version = sslversion;
+ logpb.err = err;
+ logpb.err_str = extraErrorMsg;
+ slapd_log_access_tls_client_auth(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " %s failed to map client "
+ "certificate to LDAP DN (%s)\n",
+ conn->c_connid,
+ sslversion, extraErrorMsg);
+ }
}
/*
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index eab837166..06792a55a 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -7270,6 +7270,8 @@ slapd_log_pblock_init(slapd_log_pblock *logpb, int32_t log_format, Slapi_PBlock
slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
}
+ memset(logpb, 0, sizeof(slapd_log_pblock));
+
logpb->loginfo = &loginfo;
logpb->level = 256; /* default log level */
logpb->log_format = log_format;
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index 6438a81fe..da232ae2f 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -1549,6 +1549,13 @@ typedef struct slapd_log_pblock {
PRBool using_tls;
PRBool haproxied;
const char *bind_dn;
+ /* TLS */
+ const char *tls_version;
+ int32_t keysize;
+ const char *cipher;
+ const char *subject;
+ const char *issuer;
+ const char *client_dn;
/* Close connection */
const char *close_error;
const char *close_reason;
@@ -1619,6 +1626,7 @@ typedef struct slapd_log_pblock {
const char *oid;
const char *msg;
const char *name;
+ const char *err_str;
LDAPControl **request_controls;
LDAPControl **response_controls;
} slapd_log_pblock;
@@ -1645,6 +1653,8 @@ int32_t slapd_log_access_entry(slapd_log_pblock *logpb);
int32_t slapd_log_access_referral(slapd_log_pblock *logpb);
int32_t slapd_log_access_extop(slapd_log_pblock *logpb);
int32_t slapd_log_access_sort(slapd_log_pblock *logpb);
+int32_t slapd_log_access_tls(slapd_log_pblock *logpb);
+int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb);
#ifdef __cplusplus
}
--
2.49.0
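The pattern in `slapd_log_access_tls()` above — a base event plus optional keys that are added only when they carry a value — is easy to picture outside json-c. A Python sketch with the same field names (the `"operation"` wrapper key is an assumption, since `build_base_obj()` is not shown in this hunk):

```python
import json

def build_tls_event(conn_id, msg=None, tls_version=None, cipher=None,
                    keysize=None, err=None, err_str=None):
    """Build a TLS_INFO access-log event, adding optional keys only
    when they have a value, mirroring the json-c code above."""
    event = {"operation": "TLS_INFO", "conn_id": conn_id}
    if msg:
        event["msg"] = msg
    if tls_version:
        event["tls_version"] = tls_version
    if cipher:
        event["cipher"] = cipher
    if keysize:
        event["keysize"] = keysize
    if err_str:
        # err and err_msg travel together, as in the C code.
        event["err"] = err
        event["err_msg"] = err_str
    return json.dumps(event)
```

Consumers of the JSON access log can then test for the presence of a key rather than parsing a fixed-position text line.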

@@ -0,0 +1,67 @@
From c44c45797a0e92fcdb6f0cc08f56816c7d77ffac Mon Sep 17 00:00:00 2001
From: Anuar Beisembayev <111912342+abeisemb@users.noreply.github.com>
Date: Wed, 23 Jul 2025 23:48:11 -0400
Subject: [PATCH] Issue 6772 - dsconf - Replicas with the "consumer" role allow
for viewing and modification of their changelog. (#6773)
dsconf currently allows users to set and retrieve changelogs in consumer replicas, which do not have officially supported changelogs. This can lead to undefined behavior and confusion.
This commit prints a warning message if the user tries to interact with a changelog on a consumer replica.
Resolves: https://github.com/389ds/389-ds-base/issues/6772
Reviewed by: @droideck
---
src/lib389/lib389/cli_conf/replication.py | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/src/lib389/lib389/cli_conf/replication.py b/src/lib389/lib389/cli_conf/replication.py
index 6f77f34ca..a18bf83ca 100644
--- a/src/lib389/lib389/cli_conf/replication.py
+++ b/src/lib389/lib389/cli_conf/replication.py
@@ -686,6 +686,9 @@ def set_per_backend_cl(inst, basedn, log, args):
replace_list = []
did_something = False
+ if (is_replica_role_consumer(inst, suffix)):
+ log.info("Warning: Changelogs are not supported for consumer replicas. You may run into undefined behavior.")
+
if args.encrypt:
cl.replace('nsslapd-encryptionalgorithm', 'AES')
del args.encrypt
@@ -715,6 +718,10 @@ def set_per_backend_cl(inst, basedn, log, args):
# that means there is a changelog config entry per backend (aka suffix)
def get_per_backend_cl(inst, basedn, log, args):
suffix = args.suffix
+
+ if (is_replica_role_consumer(inst, suffix)):
+ log.info("Warning: Changelogs are not supported for consumer replicas. You may run into undefined behavior.")
+
cl = Changelog(inst, suffix)
if args and args.json:
log.info(cl.get_all_attrs_json())
@@ -822,6 +829,22 @@ def del_repl_manager(inst, basedn, log, args):
log.info("Successfully deleted replication manager: " + manager_dn)
+def is_replica_role_consumer(inst, suffix):
+ """Helper function for get_per_backend_cl and set_per_backend_cl.
+ Returns True if the replica for this suffix has the consumer role,
+ which does not support changelogs.
+ """
+ replicas = Replicas(inst)
+ try:
+ replica = replicas.get(suffix)
+ role = replica.get_role()
+ except ldap.NO_SUCH_OBJECT:
+ raise ValueError(f"Backend \"{suffix}\" is not enabled for replication")
+
+ if role == ReplicaRole.CONSUMER:
+ return True
+ else:
+ return False
#
# Agreements
--
2.49.0

@@ -0,0 +1,360 @@
From b5134beedc719094193331ddbff0ca75316f93ff Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Mon, 21 Jul 2025 18:07:21 -0400
Subject: [PATCH] Issue 6893 - Log user that is updated during password modify
extended operation
Description:
When a user's password is updated via an extended operation (password modify
plugin) we only log the bind DN, not which user was updated. While "internal
operation" logging does display the user, it should be logged at the default
logging level.
Add access logging using "EXT_INFO" for the old logging format, and
"EXTENDED_OP_INFO" for JSON logging, where we display the bind DN, target
DN, and message.
Relates: https://github.com/389ds/389-ds-base/issues/6893
Reviewed by: spichugi & tbordaz(Thanks!!)
---
.../logging/access_json_logging_test.py | 98 +++++++++++++++----
ldap/servers/slapd/accesslog.c | 47 +++++++++
ldap/servers/slapd/passwd_extop.c | 69 +++++++------
ldap/servers/slapd/slapi-private.h | 1 +
4 files changed, 169 insertions(+), 46 deletions(-)
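The new EXTENDED_OP_INFO event records both who performed the password-modify extended operation (bind DN) and which entry it changed (target DN), which the old logging omitted. A Python sketch of the event shape the tests below assert on (the `"operation"` key and DN lowercasing are assumptions inferred from the test expectations, not shown C code):

```python
import json

def extended_op_info_event(conn_id, op_id, name, bind_dn, target_dn, msg):
    """EXTENDED_OP_INFO sketch: log who performed a password-modify
    extended op and which entry it targeted."""
    return json.dumps({
        "operation": "EXTENDED_OP_INFO",
        "conn_id": conn_id,
        "op_id": op_id,
        "name": name,            # e.g. "passwd_modify_plugin"
        # DNs are normalized to lower case, matching the test assertions.
        "bind_dn": bind_dn.lower(),
        "target_dn": target_dn.lower(),
        "msg": msg,              # "success" or the LDAP error text
    })
```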
diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
index f0dc861a7..699bd8c4d 100644
--- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py
+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py
@@ -11,7 +11,7 @@ import os
import time
import ldap
import pytest
-from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL
+from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL, DN_DM
from lib389.properties import TASK_WAIT
from lib389.topologies import topology_m2 as topo_m2
from lib389.idm.group import Groups
@@ -548,22 +548,6 @@ def test_access_json_format(topo_m2, setup_test):
"2.16.840.1.113730.3.4.3",
"LDAP_CONTROL_PERSISTENTSEARCH")
- #
- # Extended op
- #
- log.info("Test EXTENDED_OP")
- event = get_log_event(inst, "EXTENDED_OP", "oid",
- "2.16.840.1.113730.3.5.12")
- assert event is not None
- assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID"
- assert event['name'] == "replication-multisupplier-extop"
-
- event = get_log_event(inst, "EXTENDED_OP", "oid",
- "2.16.840.1.113730.3.5.5")
- assert event is not None
- assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
- assert event['name'] == "replication-multisupplier-extop"
-
#
# TLS INFO/TLS CLIENT INFO
#
@@ -579,7 +563,8 @@ def test_access_json_format(topo_m2, setup_test):
'sn': RDN_TEST_USER,
'uidNumber': '1000',
'gidNumber': '2000',
- 'homeDirectory': f'/home/{RDN_TEST_USER}'
+ 'homeDirectory': f'/home/{RDN_TEST_USER}',
+ 'userpassword': 'password'
})
ssca_dir = inst.get_ssca_dir()
@@ -646,6 +631,83 @@ def test_access_json_format(topo_m2, setup_test):
assert event['msg'] == "failed to map client certificate to LDAP DN"
assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry"
+ #
+ # Extended op
+ #
+ log.info("Test EXTENDED_OP")
+ event = get_log_event(inst, "EXTENDED_OP", "oid",
+ "2.16.840.1.113730.3.5.12")
+ assert event is not None
+ assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID"
+ assert event['name'] == "replication-multisupplier-extop"
+
+ event = get_log_event(inst, "EXTENDED_OP", "oid",
+ "2.16.840.1.113730.3.5.5")
+ assert event is not None
+ assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID"
+ assert event['name'] == "replication-multisupplier-extop"
+
+ #
+ # Extended op info
+ #
+ log.info("Test EXTENDED_OP_INFO")
+ OLD_PASSWD = 'password'
+ NEW_PASSWD = 'newpassword'
+
+ assert inst.simple_bind_s(DN_DM, PASSWORD)
+
+ assert inst.passwd_s(user.dn, OLD_PASSWD, NEW_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "name",
+ "passwd_modify_plugin")
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "success"
+
+ # Test no such object
+ BAD_DN = user.dn + ",dc=not"
+ with pytest.raises(ldap.NO_SUCH_OBJECT):
+ inst.passwd_s(BAD_DN, OLD_PASSWD, NEW_PASSWD)
+
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "target_dn", BAD_DN)
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == BAD_DN.lower()
+ assert event['msg'] == "No such entry exists."
+
+ # Test invalid old password
+ with pytest.raises(ldap.INVALID_CREDENTIALS):
+ inst.passwd_s(user.dn, "not_the_old_pw", NEW_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 49)
+ assert event is not None
+ assert event['bind_dn'] == "cn=directory manager"
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "Invalid oldPasswd value."
+
+ # Test user without permissions
+ user2 = users.create(properties={
+ 'uid': RDN_TEST_USER + "2",
+ 'cn': RDN_TEST_USER + "2",
+ 'sn': RDN_TEST_USER + "2",
+ 'uidNumber': '1001',
+ 'gidNumber': '2001',
+ 'homeDirectory': f'/home/{RDN_TEST_USER + "2"}',
+ 'userpassword': 'password'
+ })
+ inst.simple_bind_s(user2.dn, 'password')
+ with pytest.raises(ldap.INSUFFICIENT_ACCESS):
+ inst.passwd_s(user.dn, NEW_PASSWD, OLD_PASSWD)
+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 50)
+ assert event is not None
+ assert event['bind_dn'] == user2.dn.lower()
+ assert event['target_dn'] == user.dn.lower()
+ assert event['msg'] == "Insufficient access rights"
+
+
+ # Reset bind
+ inst.simple_bind_s(DN_DM, PASSWORD)
+
+
if __name__ == '__main__':
# Run isolated
diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c
index 072ace203..46228d4a1 100644
--- a/ldap/servers/slapd/accesslog.c
+++ b/ldap/servers/slapd/accesslog.c
@@ -1113,6 +1113,53 @@ slapd_log_access_extop(slapd_log_pblock *logpb)
return rc;
}
+/*
+ * Extended operation information
+ *
+ * int32_t log_format
+ * time_t conn_time
+ * uint64_t conn_id
+ * int32_t op_id
+ * const char *name
+ * const char *bind_dn
+ * const char *target_dn
+ * const char *msg
+ */
+int32_t
+slapd_log_access_extop_info(slapd_log_pblock *logpb)
+{
+ int32_t rc = 0;
+ char *msg = NULL;
+ json_object *json_obj = NULL;
+
+ if ((json_obj = build_base_obj(logpb, "EXTENDED_OP_INFO")) == NULL) {
+ return rc;
+ }
+
+ if (logpb->name) {
+ json_object_object_add(json_obj, "name", json_obj_add_str(logpb->name));
+ }
+ if (logpb->target_dn) {
+ json_object_object_add(json_obj, "target_dn", json_obj_add_str(logpb->target_dn));
+ }
+ if (logpb->bind_dn) {
+ json_object_object_add(json_obj, "bind_dn", json_obj_add_str(logpb->bind_dn));
+ }
+ if (logpb->msg) {
+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg));
+ }
+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err));
+
+ /* Convert json object to string and log it */
+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format);
+ rc = slapd_log_access_json(msg);
+
+ /* Done with JSON object - free it */
+ json_object_put(json_obj);
+
+ return rc;
+}
+
/*
* Sort
*
diff --git a/ldap/servers/slapd/passwd_extop.c b/ldap/servers/slapd/passwd_extop.c
index 4bb60afd6..69bb3494c 100644
--- a/ldap/servers/slapd/passwd_extop.c
+++ b/ldap/servers/slapd/passwd_extop.c
@@ -465,12 +465,14 @@ passwd_modify_extop(Slapi_PBlock *pb)
BerElement *response_ber = NULL;
Slapi_Entry *targetEntry = NULL;
Connection *conn = NULL;
+ Operation *pb_op = NULL;
LDAPControl **req_controls = NULL;
LDAPControl **resp_controls = NULL;
passwdPolicy *pwpolicy = NULL;
Slapi_DN *target_sdn = NULL;
Slapi_Entry *referrals = NULL;
- /* Slapi_DN sdn; */
+ Slapi_Backend *be = NULL;
+ int32_t log_format = config_get_accesslog_log_format();
slapi_log_err(SLAPI_LOG_TRACE, "passwd_modify_extop", "=>\n");
@@ -647,7 +649,7 @@ parse_req_done:
}
dn = slapi_sdn_get_ndn(target_sdn);
if (dn == NULL || *dn == '\0') {
- /* Refuse the operation because they're bound anonymously */
+ /* Invalid DN - refuse the operation */
errMesg = "Invalid dn.";
rc = LDAP_INVALID_DN_SYNTAX;
goto free_and_return;
@@ -724,14 +726,19 @@ parse_req_done:
ber_free(response_ber, 1);
}
- slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn);
+ slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op);
+ if (pb_op == NULL) {
+ slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n");
+ goto free_and_return;
+ }
+ slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn);
/* Now we have the DN, look for the entry */
ret = passwd_modify_getEntry(dn, &targetEntry);
/* If we can't find the entry, then that's an error */
if (ret) {
/* Couldn't find the entry, fail */
- errMesg = "No such Entry exists.";
+ errMesg = "No such entry exists.";
rc = LDAP_NO_SUCH_OBJECT;
goto free_and_return;
}
@@ -742,30 +749,18 @@ parse_req_done:
leak any useful information to the client such as current password
wrong, etc.
*/
- Operation *pb_op = NULL;
- slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op);
- if (pb_op == NULL) {
- slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n");
- goto free_and_return;
- }
-
operation_set_target_spec(pb_op, slapi_entry_get_sdn(targetEntry));
slapi_pblock_set(pb, SLAPI_REQUESTOR_ISROOT, &pb_op->o_isroot);
- /* In order to perform the access control check , we need to select a backend (even though
- * we don't actually need it otherwise).
- */
- {
- Slapi_Backend *be = NULL;
-
- be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry));
- if (NULL == be) {
- errMesg = "Failed to find backend for target entry";
- rc = LDAP_OPERATIONS_ERROR;
- goto free_and_return;
- }
- slapi_pblock_set(pb, SLAPI_BACKEND, be);
+ /* In order to perform the access control check, we need to select a backend (even though
+ * we don't actually need it otherwise). */
+ be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry));
+ if (NULL == be) {
+ errMesg = "Failed to find backend for target entry";
+ rc = LDAP_NO_SUCH_OBJECT;
+ goto free_and_return;
}
+ slapi_pblock_set(pb, SLAPI_BACKEND, be);
/* Check if the pwpolicy control is present */
slapi_pblock_get(pb, SLAPI_PWPOLICY, &need_pwpolicy_ctrl);
@@ -797,10 +792,7 @@ parse_req_done:
/* Check if password policy allows users to change their passwords. We need to do
* this here since the normal modify code doesn't perform this check for
* internal operations. */
-
- Connection *pb_conn;
- slapi_pblock_get(pb, SLAPI_CONNECTION, &pb_conn);
- if (!pb_op->o_isroot && !pb_conn->c_needpw && !pwpolicy->pw_change) {
+ if (!pb_op->o_isroot && !conn->c_needpw && !pwpolicy->pw_change) {
if (NULL == bindSDN) {
bindSDN = slapi_sdn_new_normdn_byref(bindDN);
}
@@ -848,6 +840,27 @@ free_and_return:
slapi_log_err(SLAPI_LOG_PLUGIN, "passwd_modify_extop",
"%s\n", errMesg ? errMesg : "success");
+ if (dn) {
+ /* Log the target ndn (if we have a target ndn) */
+ if (log_format != LOG_FORMAT_DEFAULT) {
+ /* JSON logging */
+ slapd_log_pblock logpb = {0};
+ slapd_log_pblock_init(&logpb, log_format, pb);
+ logpb.name = "passwd_modify_plugin";
+ logpb.target_dn = dn;
+ logpb.bind_dn = bindDN;
+ logpb.msg = errMesg ? errMesg : "success";
+ logpb.err = rc;
+ slapd_log_access_extop_info(&logpb);
+ } else {
+ slapi_log_access(LDAP_DEBUG_STATS,
+ "conn=%" PRIu64 " op=%d EXT_INFO name=\"passwd_modify_plugin\" bind_dn=\"%s\" target_dn=\"%s\" msg=\"%s\" rc=%d\n",
+ conn ? conn->c_connid : -1, pb_op ? pb_op->o_opid : -1,
+ bindDN ? bindDN : "", dn,
+ errMesg ? errMesg : "success", rc);
+ }
+ }
+
if ((rc == LDAP_REFERRAL) && (referrals)) {
send_referrals_from_entry(pb, referrals);
} else {
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index da232ae2f..e9abf8b75 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -1652,6 +1652,7 @@ int32_t slapd_log_access_vlv(slapd_log_pblock *logpb);
int32_t slapd_log_access_entry(slapd_log_pblock *logpb);
int32_t slapd_log_access_referral(slapd_log_pblock *logpb);
int32_t slapd_log_access_extop(slapd_log_pblock *logpb);
+int32_t slapd_log_access_extop_info(slapd_log_pblock *logpb);
int32_t slapd_log_access_sort(slapd_log_pblock *logpb);
int32_t slapd_log_access_tls(slapd_log_pblock *logpb);
int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb);
--
2.49.0
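
The patch above adds an `EXTENDED_OP_INFO` JSON access-log event carrying `name`, `bind_dn`, `target_dn`, `msg`, and `err`, and the tests filter events by key/value. A minimal standalone sketch of that filtering pattern; the sample lines and envelope key names here are assumptions modeled on the fields the diff emits, not the server's verbatim output:

```python
import json

# Hypothetical sample log lines; field names come from the diff, the
# "operation" envelope key is an assumption for illustration.
SAMPLE_LOG = '\n'.join([
    '{"operation": "EXTENDED_OP_INFO", "name": "passwd_modify_plugin",'
    ' "bind_dn": "cn=directory manager",'
    ' "target_dn": "uid=test_user,ou=people,dc=example,dc=com",'
    ' "msg": "success", "err": 0}',
    '{"operation": "EXTENDED_OP", "oid": "2.16.840.1.113730.3.5.12",'
    ' "oid_name": "REPL_START_NSDS90_REPLICATION_REQUEST_OID"}',
])

def get_log_event(lines, key, value):
    """Return the first JSON event whose `key` equals `value`, else None."""
    for line in lines.splitlines():
        event = json.loads(line)
        if event.get(key) == value:
            return event
    return None

event = get_log_event(SAMPLE_LOG, "name", "passwd_modify_plugin")
assert event is not None and event["msg"] == "success"
```

This mirrors how the test suite's `get_log_event` helper is used to assert on individual JSON access-log events.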


@ -0,0 +1,53 @@
From 048aa39d4c4955f6d9e3b018d4b1fc057f52d130 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Thu, 24 Jul 2025 19:09:40 +0200
Subject: [PATCH] Issue 6901 - Update changelog trimming logging
Description:
* Set SLAPI_LOG_ERR for message in `_cl5DispatchTrimThread`
* Set correct function name for logs in `_cl5TrimEntry`.
* Add number of scanned entries to the log.
Fixes: https://github.com/389ds/389-ds-base/issues/6901
Reviewed by: @mreynolds389, @progier389 (Thanks!)
---
ldap/servers/plugins/replication/cl5_api.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/ldap/servers/plugins/replication/cl5_api.c b/ldap/servers/plugins/replication/cl5_api.c
index 3c356abc0..1d62aa020 100644
--- a/ldap/servers/plugins/replication/cl5_api.c
+++ b/ldap/servers/plugins/replication/cl5_api.c
@@ -2007,7 +2007,7 @@ _cl5DispatchTrimThread(Replica *replica)
(void *)replica, PR_PRIORITY_NORMAL, PR_GLOBAL_THREAD,
PR_UNJOINABLE_THREAD, DEFAULT_THREAD_STACKSIZE);
if (NULL == pth) {
- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl,
+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name_cl,
"_cl5DispatchTrimThread - Failed to create trimming thread for %s"
"; NSPR error - %d\n", replica_get_name(replica),
PR_GetError());
@@ -2788,7 +2788,7 @@ _cl5TrimEntry(dbi_val_t *key, dbi_val_t *data, void *ctx)
return DBI_RC_NOTFOUND;
} else {
slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl,
- "_cl5TrimReplica - Changelog purge skipped anchor csn %s\n",
+ "_cl5TrimEntry - Changelog purge skipped anchor csn %s\n",
(char*)key->data);
return DBI_RC_SUCCESS;
}
@@ -2867,8 +2867,8 @@ _cl5TrimReplica(Replica *r)
slapi_ch_free((void**)&dblcictx.rids);
if (dblcictx.changed.tot) {
- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Trimmed %ld changes from the changelog\n",
- dblcictx.changed.tot);
+ slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Scanned %ld records, and trimmed %ld changes from the changelog\n",
+ dblcictx.seen.tot, dblcictx.changed.tot);
}
}
--
2.49.0
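
The last hunk above changes the trim summary to report both scanned and trimmed counts, since the ratio shows how much of a pass actually removed anything. A toy Python sketch of that counting pattern; the function and message are illustrative, not part of the server:

```python
def trim_changelog(records, is_expired):
    """Drop expired records while tracking scanned vs. trimmed counts."""
    seen = trimmed = 0
    kept = []
    for rec in records:
        seen += 1
        if is_expired(rec):
            trimmed += 1  # record is purged
        else:
            kept.append(rec)
    if trimmed:
        # Analogous to the updated _cl5TrimReplica log message
        print(f"Scanned {seen} records, and trimmed {trimmed} changes "
              "from the changelog")
    return kept

kept = trim_changelog(list(range(10)), lambda r: r < 4)
assert len(kept) == 6
```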

File diff suppressed because it is too large


@ -0,0 +1,380 @@
From 697f0ed364b8649141adc283a6a45702d815421e Mon Sep 17 00:00:00 2001
From: Akshay Adhikari <aadhikar@redhat.com>
Date: Mon, 28 Jul 2025 18:14:15 +0530
Subject: [PATCH] Issue 6663 - Fix NULL subsystem crash in JSON error logging
(#6883)
Description: Fix a crash in JSON error logging when the subsystem is NULL.
Parametrize the test case for easier debugging.
Relates: https://github.com/389ds/389-ds-base/issues/6663
Reviewed by: @mreynolds389
---
.../tests/suites/clu/dsconf_logging.py | 168 ------------------
.../tests/suites/clu/dsconf_logging_test.py | 164 +++++++++++++++++
ldap/servers/slapd/log.c | 2 +-
3 files changed, 165 insertions(+), 169 deletions(-)
delete mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging.py
create mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging_test.py
diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging.py b/dirsrvtests/tests/suites/clu/dsconf_logging.py
deleted file mode 100644
index 1c2f7fc2e..000000000
--- a/dirsrvtests/tests/suites/clu/dsconf_logging.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2025 Red Hat, Inc.
-# All rights reserved.
-#
-# License: GPL (version 3 or any later version).
-# See LICENSE for details.
-# --- END COPYRIGHT BLOCK ---
-#
-import json
-import subprocess
-import logging
-import pytest
-from lib389._constants import DN_DM
-from lib389.topologies import topology_st as topo
-
-pytestmark = pytest.mark.tier1
-
-log = logging.getLogger(__name__)
-
-SETTINGS = [
- ('logging-enabled', None),
- ('logging-disabled', None),
- ('mode', '700'),
- ('compress-enabled', None),
- ('compress-disabled', None),
- ('buffering-enabled', None),
- ('buffering-disabled', None),
- ('max-logs', '4'),
- ('max-logsize', '7'),
- ('rotation-interval', '2'),
- ('rotation-interval-unit', 'week'),
- ('rotation-tod-enabled', None),
- ('rotation-tod-disabled', None),
- ('rotation-tod-hour', '12'),
- ('rotation-tod-minute', '20'),
- ('deletion-interval', '3'),
- ('deletion-interval-unit', 'day'),
- ('max-disk-space', '20'),
- ('free-disk-space', '2'),
-]
-
-DEFAULT_TIME_FORMAT = "%FT%TZ"
-
-
-def execute_dsconf_command(dsconf_cmd, subcommands):
- """Execute dsconf command and return output and return code"""
-
- cmdline = dsconf_cmd + subcommands
- proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
- out, _ = proc.communicate()
- return out.decode('utf-8'), proc.returncode
-
-
-def get_dsconf_base_cmd(topo):
- """Return base dsconf command list"""
- return ['/usr/sbin/dsconf', topo.standalone.serverid,
- '-j', '-D', DN_DM, '-w', 'password', 'logging']
-
-
-def test_log_settings(topo):
- """Test each log setting can be set successfully
-
- :id: b800fd03-37f5-4e74-9af8-eeb07030eb52
- :setup: Standalone DS instance
- :steps:
- 1. Test each log's settings
- :expectedresults:
- 1. Success
- """
-
- dsconf_cmd = get_dsconf_base_cmd(topo)
- for log_type in ['access', 'audit', 'auditfail', 'error', 'security']:
- # Test "get" command
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get'])
- assert rc == 0
- json_result = json.loads(output)
- default_location = json_result['Log name and location']
-
- # Log location
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
- 'location',
- f'/tmp/{log_type}'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
- 'location',
- default_location])
- assert rc == 0
-
- # Log levels
- if log_type == "access":
- # List levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'list-levels'])
- assert rc == 0
-
- # Set levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal', 'entry'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'internal', 'default'])
- assert rc == 0
-
- if log_type == "error":
- # List levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'list-levels'])
- assert rc == 0
-
- # Set levels
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'plugin', 'replication'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set', 'level',
- 'default'])
- assert rc == 0
-
- # Log formats
- if log_type in ["access", "audit", "error"]:
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'time-format', '%D'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'time-format',
- DEFAULT_TIME_FORMAT])
- assert rc == 0
-
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'log-format',
- 'json'])
- assert rc == 0
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'log-format',
- 'default'])
- assert rc == 0
-
- # Audit log display attrs
- if log_type == "audit":
- output, rc = execute_dsconf_command(dsconf_cmd,
- [log_type, 'set',
- 'display-attrs', 'cn'])
- assert rc == 0
-
- # Common settings
- for attr, value in SETTINGS:
- if log_type == "auditfail" and attr.startswith("buffer"):
- # auditfail doesn't have a buffering settings
- continue
-
- if value is None:
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
- 'set', attr])
- else:
- output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
- 'set', attr, value])
- assert rc == 0
diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging_test.py b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py
new file mode 100644
index 000000000..ca3f71997
--- /dev/null
+++ b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py
@@ -0,0 +1,164 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import json
+import subprocess
+import logging
+import pytest
+from lib389._constants import DN_DM
+from lib389.topologies import topology_st as topo
+
+pytestmark = pytest.mark.tier1
+
+log = logging.getLogger(__name__)
+
+SETTINGS = [
+ ('logging-enabled', None),
+ ('logging-disabled', None),
+ ('mode', '700'),
+ ('compress-enabled', None),
+ ('compress-disabled', None),
+ ('buffering-enabled', None),
+ ('buffering-disabled', None),
+ ('max-logs', '4'),
+ ('max-logsize', '7'),
+ ('rotation-interval', '2'),
+ ('rotation-interval-unit', 'week'),
+ ('rotation-tod-enabled', None),
+ ('rotation-tod-disabled', None),
+ ('rotation-tod-hour', '12'),
+ ('rotation-tod-minute', '20'),
+ ('deletion-interval', '3'),
+ ('deletion-interval-unit', 'day'),
+ ('max-disk-space', '20'),
+ ('free-disk-space', '2'),
+]
+
+DEFAULT_TIME_FORMAT = "%FT%TZ"
+
+
+def execute_dsconf_command(dsconf_cmd, subcommands):
+ """Execute dsconf command and return output and return code"""
+
+ cmdline = dsconf_cmd + subcommands
+ proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ out, err = proc.communicate()
+
+ if proc.returncode != 0 and err:
+ log.error(f"Command failed: {' '.join(cmdline)}")
+ log.error(f"Stderr: {err.decode('utf-8')}")
+
+ return out.decode('utf-8'), proc.returncode
+
+
+def get_dsconf_base_cmd(topo):
+ """Return base dsconf command list"""
+ return ['/usr/sbin/dsconf', topo.standalone.serverid,
+ '-j', '-D', DN_DM, '-w', 'password', 'logging']
+
+
+@pytest.mark.parametrize("log_type", ['access', 'audit', 'auditfail', 'error', 'security'])
+def test_log_settings(topo, log_type):
+ """Test each log setting can be set successfully
+
+ :id: b800fd03-37f5-4e74-9af8-eeb07030eb52
+ :setup: Standalone DS instance
+ :steps:
+ 1. Test each log's settings
+ :expectedresults:
+ 1. Success
+ """
+
+ dsconf_cmd = get_dsconf_base_cmd(topo)
+
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get'])
+ assert rc == 0
+ json_result = json.loads(output)
+ default_location = json_result['Log name and location']
+
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
+ 'location',
+ f'/tmp/{log_type}'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set',
+ 'location',
+ default_location])
+ assert rc == 0
+
+ if log_type == "access":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'list-levels'])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal', 'entry'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'internal', 'default'])
+ assert rc == 0
+
+ if log_type == "error":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'list-levels'])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'plugin', 'replication'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set', 'level',
+ 'default'])
+ assert rc == 0
+
+ if log_type in ["access", "audit", "error"]:
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'time-format', '%D'])
+ assert rc == 0
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'time-format',
+ DEFAULT_TIME_FORMAT])
+ assert rc == 0
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'log-format',
+ 'json'])
+ assert rc == 0, f"Failed to set {log_type} log-format to json: {output}"
+
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'log-format',
+ 'default'])
+ assert rc == 0, f"Failed to set {log_type} log-format to default: {output}"
+
+ if log_type == "audit":
+ output, rc = execute_dsconf_command(dsconf_cmd,
+ [log_type, 'set',
+ 'display-attrs', 'cn'])
+ assert rc == 0
+
+ for attr, value in SETTINGS:
+ if log_type == "auditfail" and attr.startswith("buffer"):
+ continue
+
+ if value is None:
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
+ 'set', attr])
+ else:
+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type,
+ 'set', attr, value])
+ assert rc == 0
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 06792a55a..91ba23047 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -2949,7 +2949,7 @@ vslapd_log_error(
json_obj = json_object_new_object();
json_object_object_add(json_obj, "local_time", json_object_new_string(local_time));
json_object_object_add(json_obj, "severity", json_object_new_string(get_log_sev_name(sev_level, sev_name)));
- json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem));
+ json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem ? subsystem : ""));
json_object_object_add(json_obj, "msg", json_object_new_string(vbuf));
PR_snprintf(buffer, sizeof(buffer), "%s\n",
--
2.49.0
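
The one-line C fix above guards `json_object_new_string()` against a NULL `subsystem` by substituting an empty string. The same defensive pattern in a Python sketch; the field names mirror the C code, but the helper itself is hypothetical:

```python
import json

def build_error_record(local_time, severity, subsystem, msg):
    # Substitute an empty string when subsystem is missing, mirroring
    # the C fix: passing NULL to json_object_new_string() crashed the server.
    return json.dumps({
        "local_time": local_time,
        "severity": severity,
        "subsystem": subsystem if subsystem is not None else "",
        "msg": msg,
    })

rec = json.loads(build_error_record("2025-07-28T18:14:15Z", "ERR", None, "boom"))
assert rec["subsystem"] == ""
```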


@ -0,0 +1,98 @@
From d3eee2527912785505feba9bedb6d0ae988c69e5 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 23 Jul 2025 19:35:32 -0400
Subject: [PATCH] Issue 6895 - Crash if repl keep alive entry can not be
created
Description:
Heap use-after-free when logging that the replication keep-alive entry cannot
be created. slapi_add_internal_pb() frees the Slapi entry, then
we try to get the DN from the freed entry and get a use-after-free crash.
Relates: https://github.com/389ds/389-ds-base/issues/6895
Reviewed by: spichugi(Thanks!)
---
ldap/servers/plugins/chainingdb/cb_config.c | 3 +--
ldap/servers/plugins/posix-winsync/posix-winsync.c | 1 -
ldap/servers/plugins/replication/repl5_init.c | 3 ---
ldap/servers/plugins/replication/repl5_replica.c | 8 ++++----
4 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/ldap/servers/plugins/chainingdb/cb_config.c b/ldap/servers/plugins/chainingdb/cb_config.c
index 40a7088d7..24fa1bcb3 100644
--- a/ldap/servers/plugins/chainingdb/cb_config.c
+++ b/ldap/servers/plugins/chainingdb/cb_config.c
@@ -44,8 +44,7 @@ cb_config_add_dse_entries(cb_backend *cb, char **entries, char *string1, char *s
slapi_pblock_get(util_pb, SLAPI_PLUGIN_INTOP_RESULT, &res);
if (LDAP_SUCCESS != res && LDAP_ALREADY_EXISTS != res) {
slapi_log_err(SLAPI_LOG_ERR, CB_PLUGIN_SUBSYSTEM,
- "cb_config_add_dse_entries - Unable to add config entry (%s) to the DSE: %s\n",
- slapi_entry_get_dn(e),
+ "cb_config_add_dse_entries - Unable to add config entry to the DSE: %s\n",
ldap_err2string(res));
rc = res;
slapi_pblock_destroy(util_pb);
diff --git a/ldap/servers/plugins/posix-winsync/posix-winsync.c b/ldap/servers/plugins/posix-winsync/posix-winsync.c
index 51a55b643..3a002bb70 100644
--- a/ldap/servers/plugins/posix-winsync/posix-winsync.c
+++ b/ldap/servers/plugins/posix-winsync/posix-winsync.c
@@ -1626,7 +1626,6 @@ posix_winsync_end_update_cb(void *cbdata __attribute__((unused)),
"posix_winsync_end_update_cb: "
"add task entry\n");
}
- /* slapi_entry_free(e_task); */
slapi_pblock_destroy(pb);
pb = NULL;
posix_winsync_config_reset_MOFTaskCreated();
diff --git a/ldap/servers/plugins/replication/repl5_init.c b/ldap/servers/plugins/replication/repl5_init.c
index 8bc0b5372..5047fb8dc 100644
--- a/ldap/servers/plugins/replication/repl5_init.c
+++ b/ldap/servers/plugins/replication/repl5_init.c
@@ -682,7 +682,6 @@ create_repl_schema_policy(void)
repl_schema_top,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -703,7 +702,6 @@ create_repl_schema_policy(void)
repl_schema_supplier,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -724,7 +722,6 @@ create_repl_schema_policy(void)
repl_schema_consumer,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
diff --git a/ldap/servers/plugins/replication/repl5_replica.c b/ldap/servers/plugins/replication/repl5_replica.c
index 59062b46b..a97c807e9 100644
--- a/ldap/servers/plugins/replication/repl5_replica.c
+++ b/ldap/servers/plugins/replication/repl5_replica.c
@@ -465,10 +465,10 @@ replica_subentry_create(const char *repl_root, ReplicaId rid)
if (return_value != LDAP_SUCCESS &&
return_value != LDAP_ALREADY_EXISTS &&
return_value != LDAP_REFERRAL /* CONSUMER */) {
- slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - Unable to "
- "create replication keep alive entry %s: error %d - %s\n",
- slapi_entry_get_dn_const(e),
- return_value, ldap_err2string(return_value));
+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - "
+ "Unable to create replication keep alive entry 'cn=%s %d,%s': error %d - %s\n",
+ KEEP_ALIVE_ENTRY, rid, repl_root,
+ return_value, ldap_err2string(return_value));
rc = -1;
goto done;
}
--
2.49.0
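
The crash above came from reading an entry after `slapi_add_internal_pb()` had consumed and freed it; the fix logs from values that are known before the ownership-transferring call. A small Python sketch of that pattern, with illustrative names standing in for the C API:

```python
def add_internal(entry):
    """Simulates slapi_add_internal_pb(): consumes the entry and fails."""
    entry.clear()  # the caller must not read `entry` after this point
    return 1       # nonzero result code, as if the add failed

def create_keep_alive(entry, rid, repl_root):
    # Build the DN for error reporting *before* handing the entry off,
    # like the fixed replica_subentry_create() logging rid/repl_root.
    dn = f"cn={entry['cn']} {rid},{repl_root}"
    rc = add_internal(entry)
    if rc != 0:
        # Safe: uses the saved string, not the consumed entry.
        return f"Unable to create replication keep alive entry '{dn}': error {rc}"
    return None

msg = create_keep_alive({"cn": "repl keep alive"}, 1, "dc=example,dc=com")
assert "cn=repl keep alive 1,dc=example,dc=com" in msg
```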


@ -0,0 +1,814 @@
From e430e1849d40387714fd4c91613eb4bb11f211bb Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Mon, 28 Jul 2025 15:41:29 -0700
Subject: [PATCH] Issue 6884 - Mask password hashes in audit logs (#6885)
Description: Fix the audit log functionality to mask password hash values for
userPassword, nsslapd-rootpw, nsmultiplexorcredentials, nsds5ReplicaCredentials,
and nsds5ReplicaBootstrapCredentials attributes in ADD and MODIFY operations.
Update auditlog.c to detect password attributes and replace their values with
asterisks (**********************) in both LDIF and JSON audit log formats.
Add a comprehensive test suite audit_password_masking_test.py to verify
password masking works correctly across all log formats and operation types.
Fixes: https://github.com/389ds/389-ds-base/issues/6884
Reviewed by: @mreynolds389, @vashirov (Thanks!!)
---
.../logging/audit_password_masking_test.py | 501 ++++++++++++++++++
ldap/servers/slapd/auditlog.c | 170 +++++-
ldap/servers/slapd/slapi-private.h | 1 +
src/lib389/lib389/chaining.py | 3 +-
4 files changed, 652 insertions(+), 23 deletions(-)
create mode 100644 dirsrvtests/tests/suites/logging/audit_password_masking_test.py
diff --git a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
new file mode 100644
index 000000000..3b6a54849
--- /dev/null
+++ b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
@@ -0,0 +1,501 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import logging
+import pytest
+import os
+import re
+import time
+import ldap
+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PW_DM
+from lib389.topologies import topology_m2 as topo
+from lib389.idm.user import UserAccounts
+from lib389.dirsrv_log import DirsrvAuditJSONLog
+from lib389.plugins import ChainingBackendPlugin
+from lib389.chaining import ChainingLinks
+from lib389.agreement import Agreements
+from lib389.replica import ReplicationManager, Replicas
+from lib389.idm.directorymanager import DirectoryManager
+
+log = logging.getLogger(__name__)
+
+MASKED_PASSWORD = "**********************"
+TEST_PASSWORD = "MySecret123"
+TEST_PASSWORD_2 = "NewPassword789"
+TEST_PASSWORD_3 = "NewPassword101"
+
+
+def setup_audit_logging(inst, log_format='default', display_attrs=None):
+ """Configure audit logging settings"""
+ inst.config.replace('nsslapd-auditlog-logbuffering', 'off')
+ inst.config.replace('nsslapd-auditlog-logging-enabled', 'on')
+ inst.config.replace('nsslapd-auditlog-log-format', log_format)
+
+ if display_attrs is not None:
+ inst.config.replace('nsslapd-auditlog-display-attrs', display_attrs)
+
+ inst.deleteAuditLogs()
+
+
+def check_password_masked(inst, log_format, expected_password, actual_password):
+ """Helper function to check password masking in audit logs"""
+
+ time.sleep(1) # Allow log to flush
+
+ # List of all password/credential attributes that should be masked
+ password_attributes = [
+ 'userPassword',
+ 'nsslapd-rootpw',
+ 'nsmultiplexorcredentials',
+ 'nsDS5ReplicaCredentials',
+ 'nsDS5ReplicaBootstrapCredentials'
+ ]
+
+ # Get password schemes to check for hash leakage
+ user_password_scheme = inst.config.get_attr_val_utf8('passwordStorageScheme')
+ root_password_scheme = inst.config.get_attr_val_utf8('nsslapd-rootpwstoragescheme')
+
+ if log_format == 'json':
+ # Check JSON format logs
+ audit_log = DirsrvAuditJSONLog(inst)
+ log_lines = audit_log.readlines()
+
+ found_masked = False
+ found_actual = False
+ found_hashed = False
+
+ for line in log_lines:
+ # Check if any password attribute is present in the line
+ for attr in password_attributes:
+ if attr in line:
+ if expected_password in line:
+ found_masked = True
+ if actual_password in line:
+ found_actual = True
+ # Check for password scheme indicators (hashed passwords)
+ if user_password_scheme and f'{{{user_password_scheme}}}' in line:
+ found_hashed = True
+ if root_password_scheme and f'{{{root_password_scheme}}}' in line:
+ found_hashed = True
+ break # Found a password attribute, no need to check others for this line
+
+ else:
+ # Check LDIF format logs
+ found_masked = False
+ found_actual = False
+ found_hashed = False
+
+ # Check each password attribute for masked password
+ for attr in password_attributes:
+ if inst.ds_audit_log.match(f"{attr}: {re.escape(expected_password)}"):
+ found_masked = True
+ if inst.ds_audit_log.match(f"{attr}: {actual_password}"):
+ found_actual = True
+
+ # Check for hashed passwords in LDIF format
+ if user_password_scheme:
+ if inst.ds_audit_log.match(f"userPassword: {{{user_password_scheme}}}"):
+ found_hashed = True
+ if root_password_scheme:
+ if inst.ds_audit_log.match(f"nsslapd-rootpw: {{{root_password_scheme}}}"):
+ found_hashed = True
+
+ # Delete audit logs to avoid interference with other tests
+ # We need to reset the root password to default as deleteAuditLogs()
+ # opens a new connection with the default password
+ dm = DirectoryManager(inst)
+ dm.change_password(PW_DM)
+ inst.deleteAuditLogs()
+
+ return found_masked, found_actual, found_hashed
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "userPassword"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "userPassword")
+])
+def test_password_masking_add_operation(topo, log_format, display_attrs):
+ """Test password masking in ADD operations
+
+ :id: 4358bd75-bcc7-401c-b492-d3209b10412d
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Add user with password
+ 3. Check that password is masked in audit log
+ 4. Verify actual password does not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Password should be masked with asterisks
+ 4. Actual password should not be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = None
+
+ try:
+ user = users.create(properties={
+ 'uid': 'test_add_pwd_mask',
+ 'cn': 'Test Add User',
+ 'sn': 'User',
+ 'uidNumber': '1000',
+ 'gidNumber': '1000',
+ 'homeDirectory': '/home/test_add',
+ 'userPassword': TEST_PASSWORD
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+
+ assert found_masked, f"Masked password not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual password found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed password found in {log_format} ADD log (should be masked)"
+
+ finally:
+ if user is not None:
+ try:
+ user.delete()
+ except:
+ pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "userPassword"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "userPassword")
+])
+def test_password_masking_modify_operation(topo, log_format, display_attrs):
+ """Test password masking in MODIFY operations
+
+ :id: e6963aa9-7609-419c-aae2-1d517aa434bd
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Add user without password
+ 3. Add password via MODIFY operation
+ 4. Check that password is masked in audit log
+ 5. Modify password to new value
+ 6. Check that new password is also masked
+ 7. Verify actual passwords do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Success
+ 4. Password should be masked with asterisks
+ 5. Success
+ 6. New password should be masked with asterisks
+ 7. No actual password values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
+ user = None
+
+ try:
+ user = users.create(properties={
+ 'uid': 'test_modify_pwd_mask',
+ 'cn': 'Test Modify User',
+ 'sn': 'User',
+ 'uidNumber': '2000',
+ 'gidNumber': '2000',
+ 'homeDirectory': '/home/test_modify'
+ })
+
+ user.replace('userPassword', TEST_PASSWORD)
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked password not found in {log_format} MODIFY operation (first password)"
+ assert not found_actual, f"Actual password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed, f"Hashed password found in {log_format} MODIFY log (should be masked)"
+
+ user.replace('userPassword', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked password not found in {log_format} MODIFY operation (second password)"
+ assert not found_actual_2, f"Second actual password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Second hashed password found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if user is not None:
+ try:
+ user.delete()
+            except ldap.LDAPError:
+ pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsslapd-rootpw"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsslapd-rootpw")
+])
+def test_password_masking_rootpw_modify_operation(topo, log_format, display_attrs):
+ """Test password masking for nsslapd-rootpw MODIFY operations
+
+ :id: ec8c9fd4-56ba-4663-ab65-58efb3b445e4
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Modify nsslapd-rootpw in configuration
+ 3. Check that root password is masked in audit log
+ 4. Modify root password to new value
+ 5. Check that new root password is also masked
+ 6. Verify actual root passwords do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Root password should be masked with asterisks
+ 4. Success
+ 5. New root password should be masked with asterisks
+ 6. No actual root password values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+ dm = DirectoryManager(inst)
+
+ try:
+ dm.change_password(TEST_PASSWORD)
+ dm.rebind(TEST_PASSWORD)
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked root password not found in {log_format} MODIFY operation (first root password)"
+ assert not found_actual, f"Actual root password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed, f"Hashed root password found in {log_format} MODIFY log (should be masked)"
+
+ dm.change_password(TEST_PASSWORD_2)
+ dm.rebind(TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked root password not found in {log_format} MODIFY operation (second root password)"
+ assert not found_actual_2, f"Second actual root password found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Second hashed root password found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ dm.change_password(PW_DM)
+ dm.rebind(PW_DM)
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsmultiplexorcredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsmultiplexorcredentials")
+])
+def test_password_masking_multiplexor_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsmultiplexorcredentials in chaining/multiplexor configurations
+
+ :id: 161a9498-b248-4926-90be-a696a36ed36e
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a chaining backend configuration entry with nsmultiplexorcredentials
+ 3. Check that multiplexor credentials are masked in audit log
+ 4. Modify the credentials
+ 5. Check that updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Multiplexor credentials should be masked with asterisks
+ 4. Success
+ 5. Updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier1']
+ setup_audit_logging(inst, log_format, display_attrs)
+
+ # Enable chaining plugin and create chaining link
+ chain_plugin = ChainingBackendPlugin(inst)
+ chain_plugin.enable()
+
+ chains = ChainingLinks(inst)
+ chain = None
+
+ try:
+ # Create chaining link with multiplexor credentials
+ chain = chains.create(properties={
+ 'cn': 'testchain',
+ 'nsfarmserverurl': 'ldap://localhost:389/',
+ 'nsslapd-suffix': 'dc=example,dc=com',
+ 'nsmultiplexorbinddn': 'cn=manager',
+ 'nsmultiplexorcredentials': TEST_PASSWORD,
+ 'nsCheckLocalACI': 'on',
+ 'nsConnectionLife': '30',
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked multiplexor credentials not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual multiplexor credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed multiplexor credentials found in {log_format} ADD log (should be masked)"
+
+ # Modify the credentials
+ chain.replace('nsmultiplexorcredentials', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked multiplexor credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_2, f"Actual multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Hashed multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ chain_plugin.disable()
+ if chain is not None:
+ inst.delete_branch_s(chain.dn, ldap.SCOPE_ONELEVEL)
+ chain.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsDS5ReplicaCredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsDS5ReplicaCredentials")
+])
+def test_password_masking_replica_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsDS5ReplicaCredentials in replication agreements
+
+ :id: 7bf9e612-1b7c-49af-9fc0-de4c7df84b2a
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a replication agreement entry with nsDS5ReplicaCredentials
+ 3. Check that replica credentials are masked in audit log
+ 4. Modify the credentials
+ 5. Check that updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Replica credentials should be masked with asterisks
+ 4. Success
+ 5. Updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier2']
+ setup_audit_logging(inst, log_format, display_attrs)
+ agmt = None
+
+ try:
+ replicas = Replicas(inst)
+ replica = replicas.get(DEFAULT_SUFFIX)
+ agmts = replica.get_agreements()
+ agmt = agmts.create(properties={
+ 'cn': 'testagmt',
+ 'nsDS5ReplicaHost': 'localhost',
+ 'nsDS5ReplicaPort': '389',
+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+ 'nsDS5ReplicaCredentials': TEST_PASSWORD,
+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+ })
+
+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+ assert found_masked, f"Masked replica credentials not found in {log_format} ADD operation"
+ assert not found_actual, f"Actual replica credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed, f"Hashed replica credentials found in {log_format} ADD log (should be masked)"
+
+ # Modify the credentials
+ agmt.replace('nsDS5ReplicaCredentials', TEST_PASSWORD_2)
+
+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_2, f"Masked replica credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_2, f"Actual replica credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_2, f"Hashed replica credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if agmt is not None:
+ agmt.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+ ("default", None),
+ ("default", "*"),
+ ("default", "nsDS5ReplicaBootstrapCredentials"),
+ ("json", None),
+ ("json", "*"),
+ ("json", "nsDS5ReplicaBootstrapCredentials")
+])
+def test_password_masking_bootstrap_credentials(topo, log_format, display_attrs):
+ """Test password masking for nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials in replication agreements
+
+ :id: 248bd418-ffa4-4733-963d-2314c60b7c5b
+ :parametrized: yes
+ :setup: Standalone Instance
+ :steps:
+ 1. Configure audit logging format
+ 2. Create a replication agreement entry with both nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials
+ 3. Check that both credentials are masked in audit log
+ 4. Modify both credentials
+ 5. Check that both updated credentials are also masked
+ 6. Verify actual credentials do not appear in log
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Both credentials should be masked with asterisks
+ 4. Success
+ 5. Both updated credentials should be masked with asterisks
+ 6. No actual credential values should be found in log
+ """
+ inst = topo.ms['supplier2']
+ setup_audit_logging(inst, log_format, display_attrs)
+ agmt = None
+
+ try:
+ replicas = Replicas(inst)
+ replica = replicas.get(DEFAULT_SUFFIX)
+ agmts = replica.get_agreements()
+ agmt = agmts.create(properties={
+ 'cn': 'testbootstrapagmt',
+ 'nsDS5ReplicaHost': 'localhost',
+ 'nsDS5ReplicaPort': '389',
+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+ 'nsDS5ReplicaCredentials': TEST_PASSWORD,
+ 'nsDS5replicabootstrapbinddn': 'cn=bootstrap manager,cn=config',
+ 'nsDS5ReplicaBootstrapCredentials': TEST_PASSWORD_2,
+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+ })
+
+ found_masked_bootstrap, found_actual_bootstrap, found_hashed_bootstrap = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+ assert found_masked_bootstrap, f"Masked bootstrap credentials not found in {log_format} ADD operation"
+ assert not found_actual_bootstrap, f"Actual bootstrap credentials found in {log_format} ADD log (should be masked)"
+ assert not found_hashed_bootstrap, f"Hashed bootstrap credentials found in {log_format} ADD log (should be masked)"
+
+ agmt.replace('nsDS5ReplicaBootstrapCredentials', TEST_PASSWORD_3)
+
+ found_masked_bootstrap_2, found_actual_bootstrap_2, found_hashed_bootstrap_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_3)
+ assert found_masked_bootstrap_2, f"Masked bootstrap credentials not found in {log_format} MODIFY operation"
+ assert not found_actual_bootstrap_2, f"Actual bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+ assert not found_hashed_bootstrap_2, f"Hashed bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+
+ finally:
+ if agmt is not None:
+ agmt.delete()
+
+
+
+if __name__ == '__main__':
+ CURRENT_FILE = os.path.realpath(__file__)
+ pytest.main(["-s", CURRENT_FILE])
\ No newline at end of file
diff --git a/ldap/servers/slapd/auditlog.c b/ldap/servers/slapd/auditlog.c
index 1121aef35..7b591e072 100644
--- a/ldap/servers/slapd/auditlog.c
+++ b/ldap/servers/slapd/auditlog.c
@@ -39,6 +39,89 @@ static void write_audit_file(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
static const char *modrdn_changes[4];
+/* Helper function to check if an attribute is a password that needs masking */
+static int
+is_password_attribute(const char *attr_name)
+{
+ return (strcasecmp(attr_name, SLAPI_USERPWD_ATTR) == 0 ||
+ strcasecmp(attr_name, CONFIG_ROOTPW_ATTRIBUTE) == 0 ||
+ strcasecmp(attr_name, SLAPI_MB_CREDENTIALS) == 0 ||
+ strcasecmp(attr_name, SLAPI_REP_CREDENTIALS) == 0 ||
+ strcasecmp(attr_name, SLAPI_REP_BOOTSTRAP_CREDENTIALS) == 0);
+}
+
+/* Helper function to create a masked string representation of an entry */
+static char *
+create_masked_entry_string(Slapi_Entry *original_entry, int *len)
+{
+ Slapi_Attr *attr = NULL;
+ char *entry_str = NULL;
+ char *current_pos = NULL;
+ char *line_start = NULL;
+ char *next_line = NULL;
+ char *colon_pos = NULL;
+ int has_password_attrs = 0;
+
+ if (original_entry == NULL) {
+ return NULL;
+ }
+
+ /* Single pass through attributes to check for password attributes */
+ for (slapi_entry_first_attr(original_entry, &attr); attr != NULL;
+ slapi_entry_next_attr(original_entry, attr, &attr)) {
+
+ char *attr_name = NULL;
+ slapi_attr_get_type(attr, &attr_name);
+
+ if (is_password_attribute(attr_name)) {
+ has_password_attrs = 1;
+ break;
+ }
+ }
+
+ /* If no password attributes, return original string - no masking needed */
+ entry_str = slapi_entry2str(original_entry, len);
+ if (!has_password_attrs) {
+ return entry_str;
+ }
+
+ /* Process the string in-place, replacing password values */
+ current_pos = entry_str;
+ while ((line_start = current_pos) != NULL && *line_start != '\0') {
+ /* Find the end of current line */
+ next_line = strchr(line_start, '\n');
+ if (next_line != NULL) {
+ *next_line = '\0'; /* Temporarily terminate line */
+ current_pos = next_line + 1;
+ } else {
+ current_pos = NULL; /* Last line */
+ }
+
+ /* Find the colon that separates attribute name from value */
+ colon_pos = strchr(line_start, ':');
+ if (colon_pos != NULL) {
+ char saved_colon = *colon_pos;
+ *colon_pos = '\0'; /* Temporarily null-terminate attribute name */
+
+ /* Check if this is a password attribute that needs masking */
+ if (is_password_attribute(line_start)) {
+ strcpy(colon_pos + 1, " **********************");
+ }
+
+ *colon_pos = saved_colon; /* Restore colon */
+ }
+
+ /* Restore newline if it was there */
+ if (next_line != NULL) {
+ *next_line = '\n';
+ }
+ }
+
+ /* Update length since we may have shortened the string */
+ *len = strlen(entry_str);
+ return entry_str; /* Return the modified original string */
+}
+
void
write_audit_log_entry(Slapi_PBlock *pb)
{
@@ -282,10 +365,31 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
{
slapi_entry_attr_find(entry, req_attr, &entry_attr);
if (entry_attr) {
- if (use_json) {
- log_entry_attr_json(entry_attr, req_attr, id_list);
+ if (strcmp(req_attr, PSEUDO_ATTR_UNHASHEDUSERPASSWORD) == 0) {
+ /* Do not write the unhashed clear-text password */
+ continue;
+ }
+
+ /* Check if this is a password attribute that needs masking */
+ if (is_password_attribute(req_attr)) {
+ /* userpassword/rootdn password - mask the value */
+ if (use_json) {
+ json_object *secret_obj = json_object_new_object();
+ json_object_object_add(secret_obj, req_attr,
+ json_object_new_string("**********************"));
+ json_object_array_add(id_list, secret_obj);
+ } else {
+ addlenstr(l, "#");
+ addlenstr(l, req_attr);
+ addlenstr(l, ": **********************\n");
+ }
} else {
- log_entry_attr(entry_attr, req_attr, l);
+ /* Regular attribute - log normally */
+ if (use_json) {
+ log_entry_attr_json(entry_attr, req_attr, id_list);
+ } else {
+ log_entry_attr(entry_attr, req_attr, l);
+ }
}
}
}
@@ -300,9 +404,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
continue;
}
- if (strcasecmp(attr, SLAPI_USERPWD_ATTR) == 0 ||
- strcasecmp(attr, CONFIG_ROOTPW_ATTRIBUTE) == 0)
- {
+ if (is_password_attribute(attr)) {
/* userpassword/rootdn password - mask the value */
if (use_json) {
json_object *secret_obj = json_object_new_object();
@@ -312,7 +414,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object
} else {
addlenstr(l, "#");
addlenstr(l, attr);
- addlenstr(l, ": ****************************\n");
+ addlenstr(l, ": **********************\n");
}
continue;
}
@@ -481,6 +583,9 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
}
}
+ /* Check if this is a password attribute that needs masking */
+ int is_password_attr = is_password_attribute(mods[j]->mod_type);
+
mod = json_object_new_object();
switch (operationtype) {
case LDAP_MOD_ADD:
@@ -505,7 +610,12 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
json_object *val_list = NULL;
val_list = json_object_new_array();
for (size_t i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
- json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
+ if (is_password_attr) {
+ /* Mask password values */
+ json_object_array_add(val_list, json_object_new_string("**********************"));
+ } else {
+ json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
+ }
}
json_object_object_add(mod, "values", val_list);
}
@@ -517,8 +627,11 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
}
case SLAPI_OPERATION_ADD: {
int len;
+
e = change;
- tmp = slapi_entry2str(e, &len);
+
+ /* Create a masked string representation for password attributes */
+ tmp = create_masked_entry_string(e, &len);
tmpsave = tmp;
while ((tmp = strchr(tmp, '\n')) != NULL) {
tmp++;
@@ -665,6 +778,10 @@ write_audit_file(
break;
}
}
+
+ /* Check if this is a password attribute that needs masking */
+ int is_password_attr = is_password_attribute(mods[j]->mod_type);
+
switch (operationtype) {
case LDAP_MOD_ADD:
addlenstr(l, "add: ");
@@ -689,18 +806,27 @@ write_audit_file(
break;
}
if (operationtype != LDAP_MOD_IGNORE) {
- for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
- char *buf, *bufp;
- len = strlen(mods[j]->mod_type);
- len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
- buf = slapi_ch_malloc(len);
- bufp = buf;
- slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
- mods[j]->mod_bvalues[i]->bv_val,
- mods[j]->mod_bvalues[i]->bv_len, 0);
- *bufp = '\0';
- addlenstr(l, buf);
- slapi_ch_free((void **)&buf);
+ if (is_password_attr) {
+ /* Add masked password */
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
+ addlenstr(l, mods[j]->mod_type);
+ addlenstr(l, ": **********************\n");
+ }
+ } else {
+ /* Add actual values for non-password attributes */
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
+ char *buf, *bufp;
+ len = strlen(mods[j]->mod_type);
+ len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
+ buf = slapi_ch_malloc(len);
+ bufp = buf;
+ slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
+ mods[j]->mod_bvalues[i]->bv_val,
+ mods[j]->mod_bvalues[i]->bv_len, 0);
+ *bufp = '\0';
+ addlenstr(l, buf);
+ slapi_ch_free((void **)&buf);
+ }
}
}
addlenstr(l, "-\n");
@@ -711,7 +837,7 @@ write_audit_file(
e = change;
addlenstr(l, attr_changetype);
addlenstr(l, ": add\n");
- tmp = slapi_entry2str(e, &len);
+ tmp = create_masked_entry_string(e, &len);
tmpsave = tmp;
while ((tmp = strchr(tmp, '\n')) != NULL) {
tmp++;
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
index e9abf8b75..02f22fd2d 100644
--- a/ldap/servers/slapd/slapi-private.h
+++ b/ldap/servers/slapd/slapi-private.h
@@ -848,6 +848,7 @@ void task_cleanup(void);
/* for reversible encyrption */
#define SLAPI_MB_CREDENTIALS "nsmultiplexorcredentials"
#define SLAPI_REP_CREDENTIALS "nsds5ReplicaCredentials"
+#define SLAPI_REP_BOOTSTRAP_CREDENTIALS "nsds5ReplicaBootstrapCredentials"
int pw_rever_encode(Slapi_Value **vals, char *attr_name);
int pw_rever_decode(char *cipher, char **plain, const char *attr_name);
diff --git a/src/lib389/lib389/chaining.py b/src/lib389/lib389/chaining.py
index 533b83ebf..33ae78c8b 100644
--- a/src/lib389/lib389/chaining.py
+++ b/src/lib389/lib389/chaining.py
@@ -134,7 +134,7 @@ class ChainingLink(DSLdapObject):
"""
# Create chaining entry
- super(ChainingLink, self).create(rdn, properties, basedn)
+ link = super(ChainingLink, self).create(rdn, properties, basedn)
# Create mapping tree entry
dn_comps = ldap.explode_dn(properties['nsslapd-suffix'][0])
@@ -149,6 +149,7 @@ class ChainingLink(DSLdapObject):
self._mts.ensure_state(properties=mt_properties)
except ldap.ALREADY_EXISTS:
pass
+ return link
class ChainingLinks(DSLdapObjects):
--
2.49.0
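The masking logic that the patch above adds in C (`is_password_attribute()` plus `create_masked_entry_string()`) can be illustrated with a small Python sketch. This is not the server code — the attribute list is copied from the patch's `#define`s and the mask width matches its 22-asterisk literal; everything else is assumed for illustration:

```python
# Illustrative sketch of the audit-log masking idea, not the slapd implementation.
# Attribute names mirror is_password_attribute() in the patch above.
PASSWORD_ATTRS = {
    "userpassword",
    "nsslapd-rootpw",
    "nsmultiplexorcredentials",
    "nsds5replicacredentials",
    "nsds5replicabootstrapcredentials",
}

MASK = "*" * 22  # same width as the literal used in auditlog.c


def mask_entry_string(entry_str):
    """Replace the value of any password attribute line with a fixed mask,
    leaving all other LDIF-style lines untouched."""
    out = []
    for line in entry_str.splitlines():
        attr, sep, _value = line.partition(":")
        if sep and attr.strip().lower() in PASSWORD_ATTRS:
            out.append(f"{attr}: {MASK}")
        else:
            out.append(line)
    return "\n".join(out)
```

Unlike the in-place C version, this sketch builds a new string, which sidesteps the buffer-length bookkeeping the C helper has to do.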

From 572fe6c91fda1c2cfd3afee894c922edccf9c1f1 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Wed, 16 Jul 2025 11:22:30 +0200
Subject: [PATCH] Issue 6778 - Memory leak in
roles_cache_create_object_from_entry part 2
Bug Description:
Every time a role with a scope DN is processed, we leak rolescopeDN.
Fix Description:
* Initialize all pointer variables to NULL
* Add additional NULL checks
* Free rolescopeDN
* Move test_rewriter_with_invalid_filter before the DB contains 90k entries
* Use task.wait() for import task completion instead of parsing logs,
increase the timeout
Fixes: https://github.com/389ds/389-ds-base/issues/6778
Reviewed by: @progier389 (Thanks!)
---
dirsrvtests/tests/suites/roles/basic_test.py | 164 +++++++++----------
ldap/servers/plugins/roles/roles_cache.c | 10 +-
2 files changed, 82 insertions(+), 92 deletions(-)
diff --git a/dirsrvtests/tests/suites/roles/basic_test.py b/dirsrvtests/tests/suites/roles/basic_test.py
index d92d6f0c3..ec208bae9 100644
--- a/dirsrvtests/tests/suites/roles/basic_test.py
+++ b/dirsrvtests/tests/suites/roles/basic_test.py
@@ -510,6 +510,76 @@ def test_vattr_on_managed_role(topo, request):
request.addfinalizer(fin)
+def test_rewriter_with_invalid_filter(topo, request):
+ """Test that server does not crash when having
+ invalid filter in filtered role
+
+ :id: 5013b0b2-0af6-11f0-8684-482ae39447e5
+ :setup: standalone server
+ :steps:
+ 1. Setup filtered role with good filter
+ 2. Setup nsrole rewriter
+ 3. Restart the server
+ 4. Search for entries
+ 5. Setup filtered role with bad filter
+ 6. Search for entries
+ :expectedresults:
+ 1. Operation should succeed
+ 2. Operation should succeed
+ 3. Operation should succeed
+ 4. Operation should succeed
+ 5. Operation should succeed
+ 6. Operation should succeed
+ """
+ inst = topo.standalone
+ entries = []
+
+ def fin():
+ inst.start()
+ for entry in entries:
+ entry.delete()
+ request.addfinalizer(fin)
+
+ # Setup filtered role
+ roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}')
+ filter_ko = '(&((objectClass=top)(objectClass=nsPerson))'
+ filter_ok = '(&(objectClass=top)(objectClass=nsPerson))'
+ role_properties = {
+ 'cn': 'TestFilteredRole',
+ 'nsRoleFilter': filter_ok,
+ 'description': 'Test good filter',
+ }
+ role = roles.create(properties=role_properties)
+ entries.append(role)
+
+ # Setup nsrole rewriter
+ rewriters = Rewriters(inst)
+ rewriter_properties = {
+ "cn": "nsrole",
+ "nsslapd-libpath": 'libroles-plugin',
+ "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter',
+ }
+ rewriter = rewriters.ensure_state(properties=rewriter_properties)
+ entries.append(rewriter)
+
+    # Restart the instance
+ inst.restart()
+
+ # Search for entries
+    inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
+
+ # Set bad filter
+ role_properties = {
+ 'cn': 'TestFilteredRole',
+ 'nsRoleFilter': filter_ko,
+ 'description': 'Test bad filter',
+ }
+ role.ensure_state(properties=role_properties)
+
+ # Search for entries
+    inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
+
+
def test_managed_and_filtered_role_rewrite(topo, request):
"""Test that filter components containing 'nsrole=xxx'
are reworked if xxx is either a filtered role or a managed
@@ -581,17 +651,11 @@ def test_managed_and_filtered_role_rewrite(topo, request):
PARENT="ou=people,%s" % DEFAULT_SUFFIX
dbgen_users(topo.standalone, 90000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT)
- # online import
+ # Online import
import_task = ImportTask(topo.standalone)
import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX)
- # Check for up to 200sec that the completion
- for i in range(1, 20):
- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*')) > 0:
- break
- time.sleep(10)
- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*')
- assert (len(import_complete) == 1)
-
+ import_task.wait(timeout=400)
+ assert import_task.get_exit_code() == 0
# Restart server
topo.standalone.restart()
@@ -715,17 +779,11 @@ def test_not_such_entry_role_rewrite(topo, request):
PARENT="ou=people,%s" % DEFAULT_SUFFIX
dbgen_users(topo.standalone, 91000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT)
- # online import
+ # Online import
import_task = ImportTask(topo.standalone)
import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX)
- # Check for up to 200sec that the completion
- for i in range(1, 20):
- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9100.*')) > 0:
- break
- time.sleep(10)
- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9100.*')
- assert (len(import_complete) == 1)
-
+ import_task.wait(timeout=400)
+ assert import_task.get_exit_code() == 0
# Restart server
topo.standalone.restart()
@@ -769,76 +827,6 @@ def test_not_such_entry_role_rewrite(topo, request):
request.addfinalizer(fin)
-def test_rewriter_with_invalid_filter(topo, request):
- """Test that server does not crash when having
- invalid filter in filtered role
-
- :id: 5013b0b2-0af6-11f0-8684-482ae39447e5
- :setup: standalone server
- :steps:
- 1. Setup filtered role with good filter
- 2. Setup nsrole rewriter
- 3. Restart the server
- 4. Search for entries
- 5. Setup filtered role with bad filter
- 6. Search for entries
- :expectedresults:
- 1. Operation should succeed
- 2. Operation should succeed
- 3. Operation should succeed
- 4. Operation should succeed
- 5. Operation should succeed
- 6. Operation should succeed
- """
- inst = topo.standalone
- entries = []
-
- def fin():
- inst.start()
- for entry in entries:
- entry.delete()
- request.addfinalizer(fin)
-
- # Setup filtered role
- roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}')
- filter_ko = '(&((objectClass=top)(objectClass=nsPerson))'
- filter_ok = '(&(objectClass=top)(objectClass=nsPerson))'
- role_properties = {
- 'cn': 'TestFilteredRole',
- 'nsRoleFilter': filter_ok,
- 'description': 'Test good filter',
- }
- role = roles.create(properties=role_properties)
- entries.append(role)
-
- # Setup nsrole rewriter
- rewriters = Rewriters(inst)
- rewriter_properties = {
- "cn": "nsrole",
- "nsslapd-libpath": 'libroles-plugin',
- "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter',
- }
- rewriter = rewriters.ensure_state(properties=rewriter_properties)
- entries.append(rewriter)
-
- # Restart thge instance
- inst.restart()
-
- # Search for entries
- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
-
- # Set bad filter
- role_properties = {
- 'cn': 'TestFilteredRole',
- 'nsRoleFilter': filter_ko,
- 'description': 'Test bad filter',
- }
- role.ensure_state(properties=role_properties)
-
- # Search for entries
- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn)
-
-
if __name__ == "__main__":
CURRENT_FILE = os.path.realpath(__file__)
pytest.main("-s -v %s" % CURRENT_FILE)
diff --git a/ldap/servers/plugins/roles/roles_cache.c b/ldap/servers/plugins/roles/roles_cache.c
index 3e1c5b429..05cabc3a3 100644
--- a/ldap/servers/plugins/roles/roles_cache.c
+++ b/ldap/servers/plugins/roles/roles_cache.c
@@ -1117,16 +1117,17 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu
rolescopeDN = slapi_entry_attr_get_charptr(role_entry, ROLE_SCOPE_DN);
if (rolescopeDN) {
- Slapi_DN *rolescopeSDN;
- Slapi_DN *top_rolescopeSDN, *top_this_roleSDN;
+ Slapi_DN *rolescopeSDN = NULL;
+ Slapi_DN *top_rolescopeSDN = NULL;
+ Slapi_DN *top_this_roleSDN = NULL;
/* Before accepting to use this scope, first check if it belongs to the same suffix */
rolescopeSDN = slapi_sdn_new_dn_byref(rolescopeDN);
- if ((strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) &&
+ if (rolescopeSDN && (strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) &&
(slapi_dn_syntax_check(NULL, (char *)slapi_sdn_get_ndn(rolescopeSDN), 1) == 0)) {
top_rolescopeSDN = roles_cache_get_top_suffix(rolescopeSDN);
top_this_roleSDN = roles_cache_get_top_suffix(this_role->dn);
- if (slapi_sdn_compare(top_rolescopeSDN, top_this_roleSDN) == 0) {
+ if (top_rolescopeSDN && top_this_roleSDN && slapi_sdn_compare(top_rolescopeSDN, top_this_roleSDN) == 0) {
/* rolescopeDN belongs to the same suffix as the role, we can use this scope */
this_role->rolescopedn = rolescopeSDN;
} else {
@@ -1148,6 +1149,7 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu
rolescopeDN);
slapi_sdn_free(&rolescopeSDN);
}
+ slapi_ch_free_string(&rolescopeDN);
}
/* Depending upon role type, pull out the remaining information we need */
--
2.49.0

From dbaf0ccfb54be40e2854e3979bb4460e26851b5a Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 28 Jul 2025 13:16:10 +0200
Subject: [PATCH] Issue 6901 - Update changelog trimming logging - fix tests
Description:
Update changelog_trimming_test for the new error message.
Fixes: https://github.com/389ds/389-ds-base/issues/6901
Reviewed by: @progier389, @aadhikar (Thanks!)
---
.../suites/replication/changelog_trimming_test.py | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
index 2d70d328e..27d19e8fd 100644
--- a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
+++ b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py
@@ -110,7 +110,7 @@ def test_max_age(topo, setup_max_age):
do_mods(supplier, 10)
time.sleep(1) # Trimming should not have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
@@ -120,12 +120,12 @@ def test_max_age(topo, setup_max_age):
cl.set_trim_interval('5')
time.sleep(3) # Trimming should not have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
time.sleep(3) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is False:
+ if supplier.searchErrorsLog("trimmed") is False:
log.fatal('Trimming event did not occur')
assert False
@@ -159,7 +159,7 @@ def test_max_entries(topo, setup_max_entries):
do_mods(supplier, 10)
time.sleep(1) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is True:
+ if supplier.searchErrorsLog("trimmed") is True:
log.fatal('Trimming event unexpectedly occurred')
assert False
@@ -169,7 +169,7 @@ def test_max_entries(topo, setup_max_entries):
cl.set_trim_interval('5')
time.sleep(6) # Trimming should have occurred
- if supplier.searchErrorsLog("Trimmed") is False:
+ if supplier.searchErrorsLog("trimmed") is False:
log.fatal('Trimming event did not occur')
assert False
--
2.49.0
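The test updates above track a server-side message change from "Trimmed" to "trimmed". As a sketch of why case-sensitive log matching is brittle, here is a hypothetical search helper; the function and the sample log line are illustrative, not the actual lib389 `searchErrorsLog` implementation:

```python
import re

def search_errors_log(log_text, pattern, case_sensitive=False):
    """Search a log buffer for a pattern; case-insensitive by default so a
    server-side wording change ("Trimmed" -> "trimmed") doesn't break callers."""
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.search(pattern, log_text, flags) is not None

# Hypothetical error log line after the wording change.
log_text = "INFO - trimming thread - changelog trimmed 5 entries"
assert search_errors_log(log_text, "Trimmed")                        # tolerant match
assert not search_errors_log(log_text, "Trimmed", case_sensitive=True)
```

A case-insensitive default would have kept these tests passing across the message change, at the cost of matching more broadly.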

@@ -0,0 +1,32 @@
From b34cec9c719c6dcb5f3ff24b9fd9e20eb233eadf Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 28 Jul 2025 13:18:26 +0200
Subject: [PATCH] Issue 6181 - RFE - Allow system to manage uid/gid at startup
Description:
Expand CapabilityBoundingSet to include CAP_FOWNER
Relates: https://github.com/389ds/389-ds-base/issues/6181
Relates: https://github.com/389ds/389-ds-base/issues/6906
Reviewed by: @progier389 (Thanks!)
---
wrappers/systemd.template.service.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/wrappers/systemd.template.service.in b/wrappers/systemd.template.service.in
index ada608c86..8d2b96c7e 100644
--- a/wrappers/systemd.template.service.in
+++ b/wrappers/systemd.template.service.in
@@ -29,7 +29,7 @@ MemoryAccounting=yes
# Allow non-root instances to bind to low ports.
AmbientCapabilities=CAP_NET_BIND_SERVICE
-CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN
+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN CAP_FOWNER
PrivateTmp=on
# https://en.opensuse.org/openSUSE:Security_Features#Systemd_hardening_effort
--
2.49.0
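The same capability can also be granted to an already-installed instance without rebuilding the package, via a systemd drop-in. A minimal sketch, assuming the standard `dirsrv@.service` template unit (the drop-in path and file name below are illustrative):

```ini
# /etc/systemd/system/dirsrv@.service.d/capabilities.conf (hypothetical drop-in)
[Service]
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN CAP_FOWNER
```

Run `systemctl daemon-reload` and restart the instance for the drop-in to take effect.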

@@ -0,0 +1,31 @@
From 403077fd337a6221e95f704b4fcd70fe09d1d7e3 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Tue, 29 Jul 2025 08:00:00 +0200
Subject: [PATCH] Issue 6468 - CLI - Fix default error log level
Description:
Default error log level is 16384
Relates: https://github.com/389ds/389-ds-base/issues/6468
Reviewed by: @droideck (Thanks!)
---
src/lib389/lib389/cli_conf/logging.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/lib389/lib389/cli_conf/logging.py b/src/lib389/lib389/cli_conf/logging.py
index 124556f1f..d9ae1ab16 100644
--- a/src/lib389/lib389/cli_conf/logging.py
+++ b/src/lib389/lib389/cli_conf/logging.py
@@ -44,7 +44,7 @@ ERROR_LEVELS = {
+ "methods used for a SASL bind"
},
"default": {
- "level": 6384,
+ "level": 16384,
"desc": "Default logging level"
},
"filter": {
--
2.49.0
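The fix above restores a dropped digit (6384 → 16384). One way to catch this class of typo is to check that each named error log level encodes a single bit, since the slapd error levels are bit flags; this is an illustrative sanity check, not part of lib389 (the level values shown follow the patch context):

```python
# Sketch: each named error log level should be a single-bit flag.
ERROR_LEVELS = {
    "replication": 8192,
    "default": 16384,   # the value the patch restores (was mistyped as 6384)
    "filter": 32,
}

def is_single_bit(value):
    """True if exactly one bit is set in value."""
    return value > 0 and (value & (value - 1)) == 0

for name, level in ERROR_LEVELS.items():
    assert is_single_bit(level), f"{name} level {level} is not a single bit"
```

The buggy value 6384 (0b1100011110000) has five bits set and would fail this check immediately.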

@@ -0,0 +1,97 @@
From ec7c5a58c7decf94ba5011656c68597778f6059c Mon Sep 17 00:00:00 2001
From: James Chapman <jachapma@redhat.com>
Date: Fri, 1 Aug 2025 13:27:02 +0100
Subject: [PATCH] Issue 6768 - ns-slapd crashes when a referral is added
(#6780)
Bug description: When a paged result search is successfully run on a referred
suffix, we retrieve the search result set from the pblock and try to release
it. In this case the search result set is NULL, which triggers a SEGV during
the release.
Fix description: If the search result code is LDAP_REFERRAL, skip deletion of
the search result set. Added test case.
Fixes: https://github.com/389ds/389-ds-base/issues/6768
Reviewed by: @tbordaz, @progier389 (Thank you)
---
.../paged_results/paged_results_test.py | 46 +++++++++++++++++++
ldap/servers/slapd/opshared.c | 4 +-
2 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/dirsrvtests/tests/suites/paged_results/paged_results_test.py b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
index fca48db0f..1bb94b53a 100644
--- a/dirsrvtests/tests/suites/paged_results/paged_results_test.py
+++ b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
@@ -1271,6 +1271,52 @@ def test_search_stress_abandon(create_40k_users, create_user):
paged_search(conn, create_40k_users.suffix, [req_ctrl], search_flt, searchreq_attrlist, abandon_rate=abandon_rate)
+def test_search_referral(topology_st):
+ """Test a paged search on a referred suffix doesnt crash the server.
+
+ :id: c788bdbf-965b-4f12-ac24-d4d695e2cce2
+
+ :setup: Standalone instance
+
+ :steps:
+ 1. Configure a default referral.
+ 2. Create a paged result search control.
+ 3. Paged result search on referral suffix (doesn't exist on the instance, triggering a referral).
+ 4. Check the server is still running.
+ 5. Remove referral.
+
+ :expectedresults:
+ 1. Referral successfully set.
+ 2. Control created.
+ 3. Search returns ldap.REFERRAL (10).
+ 4. Server still running.
+ 5. Referral removed.
+ """
+
+ page_size = 5
+ SEARCH_SUFFIX = "dc=referme,dc=com"
+ REFERRAL = "ldap://localhost.localdomain:389/o%3dnetscaperoot"
+
+ log.info('Configuring referral')
+ topology_st.standalone.config.set('nsslapd-referral', REFERRAL)
+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral')
+ assert (referral == REFERRAL)
+
+ log.info('Create paged result search control')
+ req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='')
+
+ log.info('Perform a paged result search on referred suffix, no chase')
+ with pytest.raises(ldap.REFERRAL):
+ topology_st.standalone.search_ext_s(SEARCH_SUFFIX, ldap.SCOPE_SUBTREE, serverctrls=[req_ctrl])
+
+ log.info('Confirm instance is still running')
+ assert (topology_st.standalone.status())
+
+ log.info('Remove referral')
+ topology_st.standalone.config.remove_all('nsslapd-referral')
+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral')
+ assert (referral == None)
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c
index 545518748..a5cddfd23 100644
--- a/ldap/servers/slapd/opshared.c
+++ b/ldap/servers/slapd/opshared.c
@@ -910,7 +910,9 @@ op_shared_search(Slapi_PBlock *pb, int send_result)
/* Free the results if not "no_such_object" */
void *sr = NULL;
slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr);
- be->be_search_results_release(&sr);
+ if (be->be_search_results_release != NULL) {
+ be->be_search_results_release(&sr);
+ }
}
pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx);
rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1);
--
2.49.0

@@ -0,0 +1,33 @@
From 8d0320144a3996c63e50a8e3792e8c1de3dcd76b Mon Sep 17 00:00:00 2001
From: Yaakov Selkowitz <yselkowi@redhat.com>
Date: Wed, 20 Aug 2025 17:43:30 -0400
Subject: [PATCH] Issue 6430 - Fix build with bundled libdb
Description:
The libbdb_ro change (#6431) added a `WITH_LIBBDB_RO` automake conditional
and a `db_bdb_srcdir` autoconf substitution which must also be defined when
building --with-bundle-libdb.
Related: https://github.com/389ds/389-ds-base/pull/6431
---
m4/bundle_libdb.m4 | 3 +++
1 file changed, 3 insertions(+)
diff --git a/m4/bundle_libdb.m4 b/m4/bundle_libdb.m4
index 3ae3beb49..a182378f1 100644
--- a/m4/bundle_libdb.m4
+++ b/m4/bundle_libdb.m4
@@ -27,7 +27,10 @@ else
AC_MSG_RESULT([libdb-${db_ver_maj}.${db_ver_min}-389ds.so])
fi
+db_bdb_srcdir="ldap/servers/slapd/back-ldbm/db-bdb"
+AM_CONDITIONAL([WITH_LIBBDB_RO],[false])
+AC_SUBST(db_bdb_srcdir)
AC_SUBST(db_inc)
AC_SUBST(db_lib)
AC_SUBST(db_libver)
--
2.50.1

@@ -0,0 +1,87 @@
From 08880b84bef3033b312e66fb83cf86c7a1f7b464 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Thu, 18 Sep 2025 19:41:41 +0200
Subject: [PATCH] Issue 6997 - Logic error in get_bdb_impl_status prevents
bdb2mdb execution (#6998)
Bug Description:
On F41 and F42 migration to MDB fails:
```
dsctl localhost dblib bdb2mdb
Berkeley Database library is not available. Maybe 389-ds-base-bdb rpm should be installed.
Error: Berkeley Database library is not available
```
This happens because `BDB_IMPL_STATUS.NONE` gets returned before we check
for the presence of the standard libdb provided by the system.
Fix Description:
Rewrite the logic to return `BDB_IMPL_STATUS.NONE` only when no working
BDB implementation is found.
Fixes: https://github.com/389ds/389-ds-base/issues/6997
Reviewed by: @progier389 (Thanks!)
---
.../tests/suites/clu/dsctl_dblib_test.py | 1 +
src/lib389/lib389/cli_ctl/dblib.py | 29 ++++++++++++-------
2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/dirsrvtests/tests/suites/clu/dsctl_dblib_test.py b/dirsrvtests/tests/suites/clu/dsctl_dblib_test.py
index 9fc9caa05..76759f0ac 100644
--- a/dirsrvtests/tests/suites/clu/dsctl_dblib_test.py
+++ b/dirsrvtests/tests/suites/clu/dsctl_dblib_test.py
@@ -16,6 +16,7 @@ from lib389.backend import DatabaseConfig
from lib389.cli_ctl.dblib import (
FakeArgs,
dblib_bdb2mdb,
+ dblib_mdb2bdb,
dblib_cleanup,
is_bdb_supported)
from lib389.idm.user import UserAccounts
diff --git a/src/lib389/lib389/cli_ctl/dblib.py b/src/lib389/lib389/cli_ctl/dblib.py
index d4005de82..d94288494 100644
--- a/src/lib389/lib389/cli_ctl/dblib.py
+++ b/src/lib389/lib389/cli_ctl/dblib.py
@@ -146,20 +146,27 @@ def get_bdb_impl_status():
bundledbdb_plugin = 'libback-bdb'
libdb = 'libdb-'
plgstrs = check_plugin_strings(backldbm, [bundledbdb_plugin, libdb])
- if has_robdb is True:
- # read-only bdb build
+ has_bundled_strings = plgstrs[bundledbdb_plugin] is True
+ has_standard_strings = plgstrs[libdb] is True
+
+ # Check read-only BDB
+ if has_robdb:
return BDB_IMPL_STATUS.READ_ONLY
- if plgstrs[bundledbdb_plugin] is True:
- # bundled bdb build
- if find_plugin_path(bundledbdb_plugin):
- return BDB_IMPL_STATUS.BUNDLED
- return BDB_IMPL_STATUS.NONE
- if plgstrs[libdb] is True:
- # standard bdb package build
+
+ # Check bundled BDB
+ if has_bundled_strings and find_plugin_path(bundledbdb_plugin):
+ return BDB_IMPL_STATUS.BUNDLED
+
+ # Check standard (provided by system) BDB
+ if has_standard_strings:
return BDB_IMPL_STATUS.STANDARD
- # Unable to find libback-ldbm plugin
- return BDB_IMPL_STATUS.UNKNOWN
+ # If bundled strings found but no working implementation
+ if has_bundled_strings:
+ return BDB_IMPL_STATUS.NONE
+
+ # Unable to find any BDB indicators in libback-ldbm plugin
+ return BDB_IMPL_STATUS.UNKNOWN
def is_bdb_supported(read_write=True):
bdbok = [BDB_IMPL_STATUS.BUNDLED, BDB_IMPL_STATUS.STANDARD]
--
2.49.0
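The rewritten ladder only returns `NONE` when bundled strings exist but no working implementation is found, so the standard-libdb check is no longer skipped. A minimal Python sketch of that decision order; the enum mirrors the patch, while the boolean parameters are simplified stand-ins for the real probing helpers (`check_plugin_strings`, `find_plugin_path`):

```python
from enum import Enum, auto

class BDB_IMPL_STATUS(Enum):
    READ_ONLY = auto()
    BUNDLED = auto()
    STANDARD = auto()
    NONE = auto()
    UNKNOWN = auto()

def get_bdb_impl_status(has_robdb, has_bundled_strings,
                        bundled_plugin_present, has_standard_strings):
    # Read-only BDB build wins outright.
    if has_robdb:
        return BDB_IMPL_STATUS.READ_ONLY
    # Bundled BDB: strings found AND the plugin file actually exists.
    if has_bundled_strings and bundled_plugin_present:
        return BDB_IMPL_STATUS.BUNDLED
    # Standard (system-provided) libdb: checked even when bundled strings
    # exist, which is the ordering the original logic got wrong.
    if has_standard_strings:
        return BDB_IMPL_STATUS.STANDARD
    # Bundled strings but no working implementation on disk.
    if has_bundled_strings:
        return BDB_IMPL_STATUS.NONE
    return BDB_IMPL_STATUS.UNKNOWN

# The bug scenario: bundled strings present but the plugin file is missing,
# while standard libdb works. The old code returned NONE here; the new
# ordering falls through to STANDARD and bdb2mdb can proceed.
assert get_bdb_impl_status(False, True, False, True) is BDB_IMPL_STATUS.STANDARD
```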

@ -10,7 +10,11 @@ ExcludeArch: i686
%global __provides_exclude ^libjemalloc\\.so.*$
%endif
%bcond bundle_libdb %{defined rhel}
%bcond bundle_libdb 0
%if 0%{?rhel} >= 10
%bcond bundle_libdb 1
%endif
%if %{with bundle_libdb}
%global libdb_version 5.3
%global libdb_base_version db-%{libdb_version}.28
@ -24,6 +28,11 @@ ExcludeArch: i686
%endif
%endif
%bcond libbdb_ro 0
%if 0%{?fedora} >= 43
%bcond libbdb_ro 1
%endif
# This is used in certain builds to help us know if it has extra features.
%global variant base
# This enables a sanitized build.
@ -66,103 +75,96 @@ ExcludeArch: i686
Summary: 389 Directory Server (%{variant})
Name: 389-ds-base
Version: 3.1.1
Version: 3.1.3
Release: %{autorelease -n %{?with_asan:-e asan}}%{?dist}
License: GPL-3.0-or-later AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-DFS-2016 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0 AND Zlib
URL: https://www.port389.org
Obsoletes: %{name}-legacy-tools < 1.4.4.6
Obsoletes: %{name}-legacy-tools-debuginfo < 1.4.4.6
Provides: ldif2ldbm >= 0
##### Bundled cargo crates list - START #####
Provides: bundled(crate(addr2line)) = 0.22.0
Provides: bundled(crate(adler)) = 1.0.2
Provides: bundled(crate(ahash)) = 0.7.8
Provides: bundled(crate(addr2line)) = 0.24.2
Provides: bundled(crate(adler2)) = 2.0.1
Provides: bundled(crate(allocator-api2)) = 0.2.21
Provides: bundled(crate(atty)) = 0.2.14
Provides: bundled(crate(autocfg)) = 1.3.0
Provides: bundled(crate(backtrace)) = 0.3.73
Provides: bundled(crate(autocfg)) = 1.5.0
Provides: bundled(crate(backtrace)) = 0.3.75
Provides: bundled(crate(base64)) = 0.13.1
Provides: bundled(crate(bitflags)) = 2.6.0
Provides: bundled(crate(bitflags)) = 2.9.1
Provides: bundled(crate(byteorder)) = 1.5.0
Provides: bundled(crate(cbindgen)) = 0.26.0
Provides: bundled(crate(cc)) = 1.1.7
Provides: bundled(crate(cfg-if)) = 1.0.0
Provides: bundled(crate(cc)) = 1.2.27
Provides: bundled(crate(cfg-if)) = 1.0.1
Provides: bundled(crate(clap)) = 3.2.25
Provides: bundled(crate(clap_lex)) = 0.2.4
Provides: bundled(crate(concread)) = 0.2.21
Provides: bundled(crate(crossbeam)) = 0.8.4
Provides: bundled(crate(crossbeam-channel)) = 0.5.13
Provides: bundled(crate(crossbeam-deque)) = 0.8.5
Provides: bundled(crate(concread)) = 0.5.6
Provides: bundled(crate(crossbeam-epoch)) = 0.9.18
Provides: bundled(crate(crossbeam-queue)) = 0.3.11
Provides: bundled(crate(crossbeam-utils)) = 0.8.20
Provides: bundled(crate(errno)) = 0.3.9
Provides: bundled(crate(fastrand)) = 2.1.0
Provides: bundled(crate(crossbeam-queue)) = 0.3.12
Provides: bundled(crate(crossbeam-utils)) = 0.8.21
Provides: bundled(crate(equivalent)) = 1.0.2
Provides: bundled(crate(errno)) = 0.3.12
Provides: bundled(crate(fastrand)) = 2.3.0
Provides: bundled(crate(fernet)) = 0.1.4
Provides: bundled(crate(foldhash)) = 0.1.5
Provides: bundled(crate(foreign-types)) = 0.3.2
Provides: bundled(crate(foreign-types-shared)) = 0.1.1
Provides: bundled(crate(getrandom)) = 0.2.15
Provides: bundled(crate(gimli)) = 0.29.0
Provides: bundled(crate(hashbrown)) = 0.12.3
Provides: bundled(crate(getrandom)) = 0.3.3
Provides: bundled(crate(gimli)) = 0.31.1
Provides: bundled(crate(hashbrown)) = 0.15.4
Provides: bundled(crate(heck)) = 0.4.1
Provides: bundled(crate(hermit-abi)) = 0.1.19
Provides: bundled(crate(indexmap)) = 1.9.3
Provides: bundled(crate(instant)) = 0.1.13
Provides: bundled(crate(itoa)) = 1.0.11
Provides: bundled(crate(jobserver)) = 0.1.32
Provides: bundled(crate(libc)) = 0.2.155
Provides: bundled(crate(linux-raw-sys)) = 0.4.14
Provides: bundled(crate(lock_api)) = 0.4.12
Provides: bundled(crate(log)) = 0.4.22
Provides: bundled(crate(lru)) = 0.7.8
Provides: bundled(crate(memchr)) = 2.7.4
Provides: bundled(crate(miniz_oxide)) = 0.7.4
Provides: bundled(crate(object)) = 0.36.2
Provides: bundled(crate(once_cell)) = 1.19.0
Provides: bundled(crate(openssl)) = 0.10.66
Provides: bundled(crate(itoa)) = 1.0.15
Provides: bundled(crate(jobserver)) = 0.1.33
Provides: bundled(crate(libc)) = 0.2.174
Provides: bundled(crate(linux-raw-sys)) = 0.9.4
Provides: bundled(crate(log)) = 0.4.27
Provides: bundled(crate(lru)) = 0.13.0
Provides: bundled(crate(memchr)) = 2.7.5
Provides: bundled(crate(miniz_oxide)) = 0.8.9
Provides: bundled(crate(object)) = 0.36.7
Provides: bundled(crate(once_cell)) = 1.21.3
Provides: bundled(crate(openssl)) = 0.10.73
Provides: bundled(crate(openssl-macros)) = 0.1.1
Provides: bundled(crate(openssl-sys)) = 0.9.103
Provides: bundled(crate(openssl-sys)) = 0.9.109
Provides: bundled(crate(os_str_bytes)) = 6.6.1
Provides: bundled(crate(parking_lot)) = 0.11.2
Provides: bundled(crate(parking_lot_core)) = 0.8.6
Provides: bundled(crate(paste)) = 0.1.18
Provides: bundled(crate(paste-impl)) = 0.1.18
Provides: bundled(crate(pin-project-lite)) = 0.2.14
Provides: bundled(crate(pkg-config)) = 0.3.30
Provides: bundled(crate(ppv-lite86)) = 0.2.18
Provides: bundled(crate(pin-project-lite)) = 0.2.16
Provides: bundled(crate(pkg-config)) = 0.3.32
Provides: bundled(crate(proc-macro-hack)) = 0.5.20+deprecated
Provides: bundled(crate(proc-macro2)) = 1.0.86
Provides: bundled(crate(quote)) = 1.0.36
Provides: bundled(crate(rand)) = 0.8.5
Provides: bundled(crate(rand_chacha)) = 0.3.1
Provides: bundled(crate(rand_core)) = 0.6.4
Provides: bundled(crate(redox_syscall)) = 0.2.16
Provides: bundled(crate(rustc-demangle)) = 0.1.24
Provides: bundled(crate(rustix)) = 0.38.34
Provides: bundled(crate(ryu)) = 1.0.18
Provides: bundled(crate(scopeguard)) = 1.2.0
Provides: bundled(crate(serde)) = 1.0.204
Provides: bundled(crate(serde_derive)) = 1.0.204
Provides: bundled(crate(serde_json)) = 1.0.121
Provides: bundled(crate(smallvec)) = 1.13.2
Provides: bundled(crate(proc-macro2)) = 1.0.95
Provides: bundled(crate(quote)) = 1.0.40
Provides: bundled(crate(r-efi)) = 5.3.0
Provides: bundled(crate(rustc-demangle)) = 0.1.25
Provides: bundled(crate(rustix)) = 1.0.7
Provides: bundled(crate(ryu)) = 1.0.20
Provides: bundled(crate(serde)) = 1.0.219
Provides: bundled(crate(serde_derive)) = 1.0.219
Provides: bundled(crate(serde_json)) = 1.0.140
Provides: bundled(crate(shlex)) = 1.3.0
Provides: bundled(crate(smallvec)) = 1.15.1
Provides: bundled(crate(sptr)) = 0.3.2
Provides: bundled(crate(strsim)) = 0.10.0
Provides: bundled(crate(syn)) = 2.0.72
Provides: bundled(crate(tempfile)) = 3.10.1
Provides: bundled(crate(syn)) = 2.0.103
Provides: bundled(crate(tempfile)) = 3.20.0
Provides: bundled(crate(termcolor)) = 1.4.1
Provides: bundled(crate(textwrap)) = 0.16.1
Provides: bundled(crate(tokio)) = 1.39.2
Provides: bundled(crate(tokio-macros)) = 2.4.0
Provides: bundled(crate(textwrap)) = 0.16.2
Provides: bundled(crate(tokio)) = 1.45.1
Provides: bundled(crate(toml)) = 0.5.11
Provides: bundled(crate(unicode-ident)) = 1.0.12
Provides: bundled(crate(tracing)) = 0.1.41
Provides: bundled(crate(tracing-attributes)) = 0.1.30
Provides: bundled(crate(tracing-core)) = 0.1.34
Provides: bundled(crate(unicode-ident)) = 1.0.18
Provides: bundled(crate(uuid)) = 0.8.2
Provides: bundled(crate(vcpkg)) = 0.2.15
Provides: bundled(crate(version_check)) = 0.9.5
Provides: bundled(crate(wasi)) = 0.11.0+wasi_snapshot_preview1
Provides: bundled(crate(wasi)) = 0.14.2+wasi_0.2.4
Provides: bundled(crate(winapi)) = 0.3.9
Provides: bundled(crate(winapi-i686-pc-windows-gnu)) = 0.4.0
Provides: bundled(crate(winapi-util)) = 0.1.8
Provides: bundled(crate(winapi-util)) = 0.1.9
Provides: bundled(crate(winapi-x86_64-pc-windows-gnu)) = 0.4.0
Provides: bundled(crate(windows-sys)) = 0.52.0
Provides: bundled(crate(windows-sys)) = 0.59.0
Provides: bundled(crate(windows-targets)) = 0.52.6
Provides: bundled(crate(windows_aarch64_gnullvm)) = 0.52.6
Provides: bundled(crate(windows_aarch64_msvc)) = 0.52.6
@ -172,48 +174,51 @@ Provides: bundled(crate(windows_i686_msvc)) = 0.52.6
Provides: bundled(crate(windows_x86_64_gnu)) = 0.52.6
Provides: bundled(crate(windows_x86_64_gnullvm)) = 0.52.6
Provides: bundled(crate(windows_x86_64_msvc)) = 0.52.6
Provides: bundled(crate(zerocopy)) = 0.6.6
Provides: bundled(crate(zerocopy-derive)) = 0.6.6
Provides: bundled(crate(wit-bindgen-rt)) = 0.39.0
Provides: bundled(crate(zeroize)) = 1.8.1
Provides: bundled(crate(zeroize_derive)) = 1.4.2
Provides: bundled(npm(@aashutoshrathi/word-wrap)) = 1.2.6
Provides: bundled(npm(@eslint-community/eslint-utils)) = 4.4.0
Provides: bundled(npm(@eslint-community/regexpp)) = 4.5.1
Provides: bundled(npm(@eslint/eslintrc)) = 2.0.3
Provides: bundled(npm(@eslint/js)) = 8.42.0
Provides: bundled(npm(@eslint-community/eslint-utils)) = 4.4.1
Provides: bundled(npm(@eslint-community/regexpp)) = 4.12.1
Provides: bundled(npm(@eslint/eslintrc)) = 2.1.4
Provides: bundled(npm(@eslint/js)) = 8.57.1
Provides: bundled(npm(@fortawesome/fontawesome-common-types)) = 0.2.36
Provides: bundled(npm(@fortawesome/fontawesome-svg-core)) = 1.2.36
Provides: bundled(npm(@fortawesome/free-solid-svg-icons)) = 5.15.4
Provides: bundled(npm(@fortawesome/react-fontawesome)) = 0.1.19
Provides: bundled(npm(@humanwhocodes/config-array)) = 0.11.10
Provides: bundled(npm(@humanwhocodes/config-array)) = 0.13.0
Provides: bundled(npm(@humanwhocodes/module-importer)) = 1.0.1
Provides: bundled(npm(@humanwhocodes/object-schema)) = 1.2.1
Provides: bundled(npm(@humanwhocodes/object-schema)) = 2.0.3
Provides: bundled(npm(@nodelib/fs.scandir)) = 2.1.5
Provides: bundled(npm(@nodelib/fs.stat)) = 2.0.5
Provides: bundled(npm(@nodelib/fs.walk)) = 1.2.8
Provides: bundled(npm(@patternfly/patternfly)) = 4.224.2
Provides: bundled(npm(@patternfly/react-charts)) = 6.94.19
Provides: bundled(npm(@patternfly/react-core)) = 4.276.8
Provides: bundled(npm(@patternfly/react-icons)) = 4.93.6
Provides: bundled(npm(@patternfly/react-styles)) = 4.92.6
Provides: bundled(npm(@patternfly/react-table)) = 4.113.0
Provides: bundled(npm(@patternfly/react-tokens)) = 4.94.6
Provides: bundled(npm(@types/d3-array)) = 3.0.5
Provides: bundled(npm(@types/d3-color)) = 3.1.0
Provides: bundled(npm(@types/d3-ease)) = 3.0.0
Provides: bundled(npm(@types/d3-interpolate)) = 3.0.1
Provides: bundled(npm(@types/d3-path)) = 3.0.0
Provides: bundled(npm(@types/d3-scale)) = 4.0.3
Provides: bundled(npm(@types/d3-shape)) = 3.1.1
Provides: bundled(npm(@types/d3-time)) = 3.0.0
Provides: bundled(npm(@types/d3-timer)) = 3.0.0
Provides: bundled(npm(acorn)) = 8.8.2
Provides: bundled(npm(@patternfly/patternfly)) = 5.4.1
Provides: bundled(npm(@patternfly/react-charts)) = 7.4.3
Provides: bundled(npm(@patternfly/react-core)) = 5.4.1
Provides: bundled(npm(@patternfly/react-icons)) = 5.4.0
Provides: bundled(npm(@patternfly/react-log-viewer)) = 5.3.0
Provides: bundled(npm(@patternfly/react-styles)) = 5.4.0
Provides: bundled(npm(@patternfly/react-table)) = 5.4.1
Provides: bundled(npm(@patternfly/react-tokens)) = 5.4.0
Provides: bundled(npm(@types/d3-array)) = 3.2.1
Provides: bundled(npm(@types/d3-color)) = 3.1.3
Provides: bundled(npm(@types/d3-ease)) = 3.0.2
Provides: bundled(npm(@types/d3-interpolate)) = 3.0.4
Provides: bundled(npm(@types/d3-path)) = 3.1.0
Provides: bundled(npm(@types/d3-scale)) = 4.0.8
Provides: bundled(npm(@types/d3-shape)) = 3.1.6
Provides: bundled(npm(@types/d3-time)) = 3.0.3
Provides: bundled(npm(@types/d3-timer)) = 3.0.2
Provides: bundled(npm(@ungap/structured-clone)) = 1.2.0
Provides: bundled(npm(@xterm/addon-canvas)) = 0.7.0
Provides: bundled(npm(@xterm/xterm)) = 5.5.0
Provides: bundled(npm(acorn)) = 8.14.0
Provides: bundled(npm(acorn-jsx)) = 5.3.2
Provides: bundled(npm(ajv)) = 6.12.6
Provides: bundled(npm(ansi-regex)) = 5.0.1
Provides: bundled(npm(ansi-styles)) = 4.3.0
Provides: bundled(npm(argparse)) = 2.0.1
Provides: bundled(npm(attr-accept)) = 1.1.3
Provides: bundled(npm(attr-accept)) = 2.2.4
Provides: bundled(npm(autolinker)) = 3.16.2
Provides: bundled(npm(balanced-match)) = 1.0.2
Provides: bundled(npm(brace-expansion)) = 1.1.11
Provides: bundled(npm(callsites)) = 3.1.0
@ -221,8 +226,8 @@ Provides: bundled(npm(chalk)) = 4.1.2
Provides: bundled(npm(color-convert)) = 2.0.1
Provides: bundled(npm(color-name)) = 1.1.4
Provides: bundled(npm(concat-map)) = 0.0.1
Provides: bundled(npm(core-js)) = 2.6.12
Provides: bundled(npm(cross-spawn)) = 7.0.3
Provides: bundled(npm(core-util-is)) = 1.0.3
Provides: bundled(npm(cross-spawn)) = 7.0.6
Provides: bundled(npm(d3-array)) = 3.2.4
Provides: bundled(npm(d3-color)) = 3.1.0
Provides: bundled(npm(d3-ease)) = 3.0.1
@ -234,42 +239,43 @@ Provides: bundled(npm(d3-shape)) = 3.2.0
Provides: bundled(npm(d3-time)) = 3.1.0
Provides: bundled(npm(d3-time-format)) = 4.1.0
Provides: bundled(npm(d3-timer)) = 3.0.1
Provides: bundled(npm(debug)) = 4.3.4
Provides: bundled(npm(debug)) = 4.3.7
Provides: bundled(npm(deep-is)) = 0.1.4
Provides: bundled(npm(delaunator)) = 4.0.1
Provides: bundled(npm(delaunay-find)) = 0.0.6
Provides: bundled(npm(dequal)) = 2.0.3
Provides: bundled(npm(doctrine)) = 3.0.0
Provides: bundled(npm(encoding)) = 0.1.13
Provides: bundled(npm(escape-string-regexp)) = 4.0.0
Provides: bundled(npm(eslint)) = 8.42.0
Provides: bundled(npm(eslint-plugin-react-hooks)) = 4.6.0
Provides: bundled(npm(eslint-scope)) = 7.2.0
Provides: bundled(npm(eslint-visitor-keys)) = 3.4.1
Provides: bundled(npm(espree)) = 9.5.2
Provides: bundled(npm(esquery)) = 1.5.0
Provides: bundled(npm(eslint)) = 8.57.1
Provides: bundled(npm(eslint-plugin-react-hooks)) = 4.6.2
Provides: bundled(npm(eslint-scope)) = 7.2.2
Provides: bundled(npm(eslint-visitor-keys)) = 3.4.3
Provides: bundled(npm(espree)) = 9.6.1
Provides: bundled(npm(esquery)) = 1.6.0
Provides: bundled(npm(esrecurse)) = 4.3.0
Provides: bundled(npm(estraverse)) = 5.3.0
Provides: bundled(npm(esutils)) = 2.0.3
Provides: bundled(npm(fast-deep-equal)) = 3.1.3
Provides: bundled(npm(fast-json-stable-stringify)) = 2.1.0
Provides: bundled(npm(fast-levenshtein)) = 2.0.6
Provides: bundled(npm(fastq)) = 1.15.0
Provides: bundled(npm(fastq)) = 1.17.1
Provides: bundled(npm(file-entry-cache)) = 6.0.1
Provides: bundled(npm(file-selector)) = 0.1.19
Provides: bundled(npm(file-selector)) = 2.1.0
Provides: bundled(npm(find-up)) = 5.0.0
Provides: bundled(npm(flat-cache)) = 3.0.4
Provides: bundled(npm(flatted)) = 3.2.7
Provides: bundled(npm(focus-trap)) = 6.9.2
Provides: bundled(npm(flat-cache)) = 3.2.0
Provides: bundled(npm(flatted)) = 3.3.1
Provides: bundled(npm(focus-trap)) = 7.5.4
Provides: bundled(npm(fs.realpath)) = 1.0.0
Provides: bundled(npm(gettext-parser)) = 2.0.0
Provides: bundled(npm(gettext-parser)) = 2.1.0
Provides: bundled(npm(glob)) = 7.2.3
Provides: bundled(npm(glob-parent)) = 6.0.2
Provides: bundled(npm(globals)) = 13.20.0
Provides: bundled(npm(globals)) = 13.24.0
Provides: bundled(npm(graphemer)) = 1.4.0
Provides: bundled(npm(has-flag)) = 4.0.0
Provides: bundled(npm(hoist-non-react-statics)) = 3.3.2
Provides: bundled(npm(iconv-lite)) = 0.6.3
Provides: bundled(npm(ignore)) = 5.2.4
Provides: bundled(npm(ignore)) = 5.3.2
Provides: bundled(npm(import-fresh)) = 3.3.0
Provides: bundled(npm(imurmurhash)) = 0.1.4
Provides: bundled(npm(inflight)) = 1.0.6
@ -278,82 +284,95 @@ Provides: bundled(npm(internmap)) = 2.0.3
Provides: bundled(npm(is-extglob)) = 2.1.1
Provides: bundled(npm(is-glob)) = 4.0.3
Provides: bundled(npm(is-path-inside)) = 3.0.3
Provides: bundled(npm(isarray)) = 1.0.0
Provides: bundled(npm(isexe)) = 2.0.0
Provides: bundled(npm(js-sha1)) = 0.7.0
Provides: bundled(npm(js-sha256)) = 0.11.0
Provides: bundled(npm(js-tokens)) = 4.0.0
Provides: bundled(npm(js-yaml)) = 4.1.0
Provides: bundled(npm(json-buffer)) = 3.0.1
Provides: bundled(npm(json-schema-traverse)) = 0.4.1
Provides: bundled(npm(json-stable-stringify-without-jsonify)) = 1.0.1
Provides: bundled(npm(json-stringify-safe)) = 5.0.1
Provides: bundled(npm(keyv)) = 4.5.4
Provides: bundled(npm(levn)) = 0.4.1
Provides: bundled(npm(locate-path)) = 6.0.0
Provides: bundled(npm(lodash)) = 4.17.21
Provides: bundled(npm(lodash.merge)) = 4.6.2
Provides: bundled(npm(loose-envify)) = 1.4.0
Provides: bundled(npm(memoize-one)) = 5.2.1
Provides: bundled(npm(minimatch)) = 3.1.2
Provides: bundled(npm(ms)) = 2.1.2
Provides: bundled(npm(ms)) = 2.1.3
Provides: bundled(npm(natural-compare)) = 1.4.0
Provides: bundled(npm(object-assign)) = 4.1.1
Provides: bundled(npm(once)) = 1.4.0
Provides: bundled(npm(optionator)) = 0.9.3
Provides: bundled(npm(optionator)) = 0.9.4
Provides: bundled(npm(p-limit)) = 3.1.0
Provides: bundled(npm(p-locate)) = 5.0.0
Provides: bundled(npm(parent-module)) = 1.0.1
Provides: bundled(npm(path-exists)) = 4.0.0
Provides: bundled(npm(path-is-absolute)) = 1.0.1
Provides: bundled(npm(path-key)) = 3.1.1
Provides: bundled(npm(popper.js)) = 1.16.1
Provides: bundled(npm(prelude-ls)) = 1.2.1
Provides: bundled(npm(prettier)) = 3.3.3
Provides: bundled(npm(process-nextick-args)) = 2.0.1
Provides: bundled(npm(prop-types)) = 15.8.1
Provides: bundled(npm(prop-types-extra)) = 1.1.1
Provides: bundled(npm(punycode)) = 2.3.0
Provides: bundled(npm(punycode)) = 2.3.1
Provides: bundled(npm(queue-microtask)) = 1.2.3
Provides: bundled(npm(react)) = 17.0.2
Provides: bundled(npm(react-dom)) = 17.0.2
Provides: bundled(npm(react-dropzone)) = 9.0.0
Provides: bundled(npm(react)) = 18.3.1
Provides: bundled(npm(react-dom)) = 18.3.1
Provides: bundled(npm(react-dropzone)) = 14.3.5
Provides: bundled(npm(react-fast-compare)) = 3.2.2
Provides: bundled(npm(react-is)) = 16.13.1
Provides: bundled(npm(readable-stream)) = 2.3.8
Provides: bundled(npm(remarkable)) = 2.0.1
Provides: bundled(npm(resolve-from)) = 4.0.0
Provides: bundled(npm(reusify)) = 1.0.4
Provides: bundled(npm(rimraf)) = 3.0.2
Provides: bundled(npm(run-parallel)) = 1.2.0
Provides: bundled(npm(safe-buffer)) = 5.2.1
Provides: bundled(npm(safer-buffer)) = 2.1.2
Provides: bundled(npm(scheduler)) = 0.20.2
Provides: bundled(npm(scheduler)) = 0.23.2
Provides: bundled(npm(shebang-command)) = 2.0.0
Provides: bundled(npm(shebang-regex)) = 3.0.0
Provides: bundled(npm(sprintf-js)) = 1.0.3
Provides: bundled(npm(string_decoder)) = 1.1.1
Provides: bundled(npm(strip-ansi)) = 6.0.1
Provides: bundled(npm(strip-json-comments)) = 3.1.1
Provides: bundled(npm(supports-color)) = 7.2.0
Provides: bundled(npm(tabbable)) = 5.3.3
Provides: bundled(npm(tabbable)) = 6.2.0
Provides: bundled(npm(text-table)) = 0.2.0
Provides: bundled(npm(tippy.js)) = 5.1.2
Provides: bundled(npm(tslib)) = 2.5.3
Provides: bundled(npm(throttle-debounce)) = 5.0.2
Provides: bundled(npm(tslib)) = 2.8.1
Provides: bundled(npm(type-check)) = 0.4.0
Provides: bundled(npm(type-fest)) = 0.20.2
Provides: bundled(npm(uri-js)) = 4.4.1
Provides: bundled(npm(victory-area)) = 36.6.10
Provides: bundled(npm(victory-axis)) = 36.6.10
Provides: bundled(npm(victory-bar)) = 36.6.10
Provides: bundled(npm(victory-brush-container)) = 36.6.10
Provides: bundled(npm(victory-chart)) = 36.6.10
Provides: bundled(npm(victory-core)) = 36.6.10
Provides: bundled(npm(victory-create-container)) = 36.6.10
Provides: bundled(npm(victory-cursor-container)) = 36.6.10
Provides: bundled(npm(victory-group)) = 36.6.10
Provides: bundled(npm(victory-legend)) = 36.6.10
Provides: bundled(npm(victory-line)) = 36.6.10
Provides: bundled(npm(victory-pie)) = 36.6.10
Provides: bundled(npm(victory-polar-axis)) = 36.6.10
Provides: bundled(npm(victory-scatter)) = 36.6.10
Provides: bundled(npm(victory-selection-container)) = 36.6.10
Provides: bundled(npm(victory-shared-events)) = 36.6.10
Provides: bundled(npm(victory-stack)) = 36.6.10
Provides: bundled(npm(victory-tooltip)) = 36.6.10
Provides: bundled(npm(victory-vendor)) = 36.6.10
Provides: bundled(npm(victory-voronoi-container)) = 36.6.10
Provides: bundled(npm(victory-zoom-container)) = 36.6.10
Provides: bundled(npm(warning)) = 4.0.3
Provides: bundled(npm(util-deprecate)) = 1.0.2
Provides: bundled(npm(uuid)) = 10.0.0
Provides: bundled(npm(victory-area)) = 37.3.1
Provides: bundled(npm(victory-axis)) = 37.3.1
Provides: bundled(npm(victory-bar)) = 37.3.1
Provides: bundled(npm(victory-box-plot)) = 37.3.1
Provides: bundled(npm(victory-brush-container)) = 37.3.1
Provides: bundled(npm(victory-chart)) = 37.3.1
Provides: bundled(npm(victory-core)) = 37.3.1
Provides: bundled(npm(victory-create-container)) = 37.3.1
Provides: bundled(npm(victory-cursor-container)) = 37.3.1
Provides: bundled(npm(victory-group)) = 37.3.1
Provides: bundled(npm(victory-legend)) = 37.3.1
Provides: bundled(npm(victory-line)) = 37.3.1
Provides: bundled(npm(victory-pie)) = 37.3.1
Provides: bundled(npm(victory-polar-axis)) = 37.3.1
Provides: bundled(npm(victory-scatter)) = 37.3.1
Provides: bundled(npm(victory-selection-container)) = 37.3.1
Provides: bundled(npm(victory-shared-events)) = 37.3.1
Provides: bundled(npm(victory-stack)) = 37.3.1
Provides: bundled(npm(victory-tooltip)) = 37.3.1
Provides: bundled(npm(victory-vendor)) = 37.3.1
Provides: bundled(npm(victory-voronoi-container)) = 37.3.1
Provides: bundled(npm(victory-zoom-container)) = 37.3.1
Provides: bundled(npm(which)) = 2.0.2
Provides: bundled(npm(word-wrap)) = 1.2.5
Provides: bundled(npm(wrappy)) = 1.0.2
Provides: bundled(npm(yocto-queue)) = 0.1.0
##### Bundled cargo crates list - END #####
@@ -370,6 +389,7 @@ BuildRequires: libicu-devel
BuildRequires: pcre2-devel
BuildRequires: cracklib-devel
BuildRequires: json-c-devel
BuildRequires: libxcrypt-devel
%if %{with clang}
BuildRequires: libatomic
BuildRequires: clang
@@ -388,9 +408,11 @@ BuildRequires: libtsan
BuildRequires: libubsan
%endif
%endif
%if %{without libbdb_ro}
%if %{without bundle_libdb}
BuildRequires: libdb-devel
%endif
%endif
# The following are needed to build the snmp ldap-agent
BuildRequires: net-snmp-devel
@@ -417,18 +439,7 @@ BuildRequires: doxygen
# For tests!
BuildRequires: libcmocka-devel
# For lib389 and related components.
BuildRequires: python%{python3_pkgversion}
BuildRequires: python%{python3_pkgversion}-devel
BuildRequires: python%{python3_pkgversion}-setuptools
BuildRequires: python%{python3_pkgversion}-ldap
BuildRequires: python%{python3_pkgversion}-pyasn1
BuildRequires: python%{python3_pkgversion}-pyasn1-modules
BuildRequires: python%{python3_pkgversion}-dateutil
BuildRequires: python%{python3_pkgversion}-argcomplete
BuildRequires: python%{python3_pkgversion}-argparse-manpage
BuildRequires: python%{python3_pkgversion}-policycoreutils
BuildRequires: python%{python3_pkgversion}-libselinux
BuildRequires: python%{python3_pkgversion}-cryptography
# For cockpit
%if %{with cockpit}
@@ -437,6 +448,9 @@ BuildRequires: npm
BuildRequires: nodejs
%endif
# For autosetup -S git
BuildRequires: git
Requires: %{name}-libs = %{version}-%{release}
Requires: python%{python3_pkgversion}-lib389 = %{version}-%{release}
@@ -457,14 +471,20 @@ Requires: cyrus-sasl-md5
# This is optionally supported by us, as we use it in our tests
Requires: cyrus-sasl-plain
# this is needed for backldbm
%if %{with libbdb_ro}
Requires: %{name}-robdb-libs = %{version}-%{release}
%else
%if %{without bundle_libdb}
Requires: libdb
%endif
%endif
Requires: lmdb-libs
# Needed by logconv.pl
%if %{without libbdb_ro}
%if %{without bundle_libdb}
Requires: perl-DB_File
%endif
%endif
Requires: perl-Archive-Tar
%if 0%{?fedora} >= 33 || 0%{?rhel} >= 9
Requires: perl-debugger
@@ -475,6 +495,8 @@ Requires: cracklib-dicts
Requires: json-c
# Log compression
Requires: zlib-devel
# logconv.py, MIME type
Requires: python3-file-magic
# Picks up our systemd deps.
%{?systemd_requires}
@@ -488,6 +510,38 @@ Source4: 389-ds-base.sysusers
Source5: https://fedorapeople.org/groups/389ds/libdb-5.3.28-59.tar.bz2
%endif
Patch: 0001-Issue-6782-Improve-paged-result-locking.patch
Patch: 0002-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch
Patch: 0003-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch
Patch: 0004-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
Patch: 0005-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch
Patch: 0006-Issue-6854-Refactor-for-improved-data-management-685.patch
Patch: 0007-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch
Patch: 0008-Issue-6848-AddressSanitizer-leak-in-do_search.patch
Patch: 0009-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch
Patch: 0010-Issue-6868-UI-schema-attribute-table-expansion-break.patch
Patch: 0011-Issue-6859-str2filter-is-not-fully-applying-matching.patch
Patch: 0012-Issue-6872-compressed-log-rotation-creates-files-wit.patch
Patch: 0013-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch
Patch: 0014-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch
Patch: 0015-Issue-6893-Log-user-that-is-updated-during-password-.patch
Patch: 0016-Issue-6901-Update-changelog-trimming-logging.patch
Patch: 0017-Issue-6430-implement-read-only-bdb-6431.patch
Patch: 0018-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch
Patch: 0019-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch
Patch: 0020-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch
Patch: 0021-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch
Patch: 0022-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch
Patch: 0023-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch
Patch: 0024-Issue-6468-CLI-Fix-default-error-log-level.patch
Patch: 0025-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch
Patch: 0026-Issue-6430-Fix-build-with-bundled-libdb.patch
Patch: 0027-Issue-6997-Logic-error-in-get_bdb_impl_status-preven.patch
# For ELN
Patch: 0001-Issue-5120-Fix-compilation-error.patch
Patch: 0001-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch
%description
389 Directory Server is an LDAPv3 compliant server. The base package includes
the LDAP server and command line utilities for server administration.
@@ -497,6 +551,17 @@ isn't what you want. Please contact support immediately.
Please see http://seclists.org/oss-sec/2016/q1/363 for more information.
%endif
%if %{with libbdb_ro}
%package robdb-libs
Summary: Read-only Berkeley Database Library
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (CC-BY-4.0 AND MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR CC0-1.0) AND (MIT OR Unlicense) AND 0BSD AND Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MIT AND ISC AND MPL-2.0 AND PSF-2.0 AND Zlib
%description robdb-libs
The %{name}-robdb-libs package contains a library, derived from the rpm
project (https://github.com/rpm-software-management/rpm), that provides
basic functions to search and read Berkeley Database records.
%endif
%package libs
Summary: Core libraries for 389 Directory Server (%{variant})
@@ -581,18 +646,8 @@ Requires: openssl
# This is for /usr/bin/c_rehash tool, only needed for openssl < 1.1.0
Requires: openssl-perl
Requires: iproute
Requires: python%{python3_pkgversion}
Requires: python%{python3_pkgversion}-distro
Requires: python%{python3_pkgversion}-ldap
Requires: python%{python3_pkgversion}-pyasn1
Requires: python%{python3_pkgversion}-pyasn1-modules
Requires: python%{python3_pkgversion}-dateutil
Requires: python%{python3_pkgversion}-argcomplete
Requires: python%{python3_pkgversion}-libselinux
Requires: python%{python3_pkgversion}-setuptools
Requires: python%{python3_pkgversion}-cryptography
Recommends: bash-completion
%{?python_provide:%python_provide python%{python3_pkgversion}-lib389}
%description -n python%{python3_pkgversion}-lib389
This module contains tools and libraries for accessing, testing,
@@ -611,8 +666,14 @@ Requires: python%{python3_pkgversion}-lib389 = %{version}-%{release}
A cockpit UI Plugin for configuring and administering the 389 Directory Server
%endif
%generate_buildrequires
cd src/lib389
# Tests do not run in %%check (lib389's tests need to be fixed),
# but the test dependencies are needed so that lib389.topologies can be imported
%pyproject_buildrequires -g test
%prep
%autosetup -p1 -v -n %{name}-%{version}
%autosetup -S git -p1 -n %{name}-%{version}
%if %{with bundle_jemalloc}
%setup -q -n %{name}-%{version} -T -D -b 3
@@ -625,6 +686,8 @@ A cockpit UI Plugin for configuring and administering the 389 Directory Server
cp %{SOURCE2} README.devel
%build
# Workaround until https://github.com/389ds/389-ds-base/issues/6476 is fixed
export CFLAGS="%{optflags} -std=gnu17"
%if %{with clang}
CLANG_FLAGS="--enable-clang"
@@ -678,7 +741,7 @@ pushd ../%{jemalloc_name}-%{jemalloc_ver}
--libdir=%{_libdir}/%{pkgname}/lib \
--bindir=%{_libdir}/%{pkgname}/bin \
--enable-prof %{lg_page} %{lg_hugepage}
make %{?_smp_mflags}
%make_build
popd
%endif
@@ -688,6 +751,7 @@ mkdir -p ../%{libdb_base_version}
pushd ../%{libdb_base_version}
tar -xjf %{_topdir}/SOURCES/%{libdb_full_version}.tar.bz2
mv %{libdb_full_version} SOURCES
sed -i -e '/^CFLAGS=/s/-fno-strict-aliasing/& -std=gnu99/' %{_builddir}/%{name}-%{version}/rpm/bundle-libdb.spec
rpmbuild --define "_topdir $PWD" -bc %{_builddir}/%{name}-%{version}/rpm/bundle-libdb.spec
popd
%endif
@@ -696,6 +760,11 @@ popd
autoreconf -fiv
%configure \
%if %{with libbdb_ro}
--with-libbdb-ro \
%else
--without-libbdb-ro \
%endif
%if %{with bundle_libdb}
--with-bundle-libdb=%{_builddir}/%{libdb_base_version}/BUILD/%{libdb_base_dir}/dist/dist-tls \
%endif
@@ -717,21 +786,15 @@ autoreconf -fiv
%endif
# lib389
make src/lib389/setup.py
pushd ./src/lib389
%py3_build
%{python3} validate_version.py --update
%pyproject_wheel
popd
# argparse-manpage dynamic man pages have hardcoded man v1 in header,
# need to change it to v8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsconf.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsctl.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dsidm.8
sed -i "1s/\"1\"/\"8\"/" %{_builddir}/%{name}-%{version}/src/lib389/man/dscreate.8
# Generate symbolic info for debuggers
export XCFLAGS=$RPM_OPT_FLAGS
make %{?_smp_mflags}
%make_build
%install
@@ -739,7 +802,7 @@ mkdir -p %{buildroot}%{_datadir}/gdb/auto-load%{_sbindir}
%if %{with cockpit}
mkdir -p %{buildroot}%{_datadir}/cockpit
%endif
make DESTDIR="$RPM_BUILD_ROOT" install
%make_install
%if %{with cockpit}
find %{buildroot}%{_datadir}/cockpit/389-console -type d | sed -e "s@%{buildroot}@@" | sed -e 's/^/\%dir /' > cockpit.list
@@ -756,7 +819,11 @@ cp -r %{_builddir}/%{name}-%{version}/man/man3 $RPM_BUILD_ROOT/%{_mandir}/man3
# lib389
pushd src/lib389
%py3_install
%pyproject_install
for clitool in dsconf dscreate dsctl dsidm openldap_to_ds; do
mv %{buildroot}%{_bindir}/$clitool %{buildroot}%{_sbindir}/
done
%pyproject_save_files -l lib389
popd
# Register CLI tools for bash completion
@@ -802,6 +869,21 @@ cp -pa $libdbbuilddir/dist/dist-tls/.libs/%{libdb_bundle_name} $RPM_BUILD_ROOT%{
popd
%endif
%if %{with libbdb_ro}
pushd lib/librobdb
cp -pa COPYING %{_builddir}/%{name}-%{version}/COPYING.librobdb
cp -pa COPYING.RPM %{_builddir}/%{name}-%{version}/COPYING.RPM
install -m 0755 -d %{buildroot}/%{_libdir}
install -m 0755 -d %{buildroot}/%{_docdir}/%{name}-robdb-libs
install -m 0755 -d %{buildroot}/%{_licensedir}/%{name}
install -m 0755 -d %{buildroot}/%{_licensedir}/%{name}-robdb-libs
install -m 0644 $PWD/README.md %{buildroot}/%{_docdir}/%{name}-robdb-libs/README.md
install -m 0644 $PWD/COPYING %{buildroot}/%{_licensedir}/%{name}-robdb-libs/COPYING
install -m 0644 $PWD/COPYING.RPM %{buildroot}/%{_licensedir}/%{name}-robdb-libs/COPYING.RPM
install -m 0644 $PWD/COPYING %{buildroot}/%{_licensedir}/%{name}/COPYING.librobdb
install -m 0644 $PWD/COPYING.RPM %{buildroot}/%{_licensedir}/%{name}/COPYING.RPM
popd
%endif
%check
# This checks the code, if it fails it prints why, then re-raises the fail to shortcircuit the rpm build.
@@ -812,6 +894,9 @@ export TSAN_OPTIONS=print_stacktrace=1:second_deadlock_stack=1:history_size=7
if ! make DESTDIR="$RPM_BUILD_ROOT" check; then cat ./test-suite.log && false; fi
%endif
# Check import for lib389 modules
%pyproject_check_import -e '*.test*'
%post
if [ -n "$DEBUGPOSTTRANS" ] ; then
output=$DEBUGPOSTTRANS
@@ -824,11 +909,6 @@ fi
# reload to pick up any changes to systemd files
/bin/systemctl daemon-reload >$output 2>&1 || :
# https://fedoraproject.org/wiki/Packaging:UsersAndGroups#Soft_static_allocation
# Soft static allocation for UID and GID
# sysusers.d format https://fedoraproject.org/wiki/Changes/Adopting_sysusers.d_format
%sysusers_create_compat %{SOURCE4}
# Reload our sysctl before we restart (if we can)
sysctl --system &> $output; true
@@ -918,6 +998,8 @@ exit 0
%{_mandir}/man1/ldclt.1.gz
%{_bindir}/logconv.pl
%{_mandir}/man1/logconv.pl.1.gz
%{_bindir}/logconv.py
%{_mandir}/man1/logconv.py.1.gz
%{_bindir}/pwdhash
%{_mandir}/man1/pwdhash.1.gz
%{_sbindir}/ns-slapd
@@ -952,6 +1034,9 @@ exit 0
%exclude %{_libdir}/%{pkgname}/lib/libjemalloc_pic.a
%exclude %{_libdir}/%{pkgname}/lib/pkgconfig
%endif
%if %{with libbdb_ro}
%exclude %{_libdir}/%{pkgname}/librobdb.so
%endif
%files devel
%doc LICENSE LICENSE.GPLv3+ LICENSE.openssl README.devel
@@ -991,18 +1076,24 @@ exit 0
%{_libdir}/%{pkgname}/plugins/libback-bdb.so
%endif
%files -n python%{python3_pkgversion}-lib389
%doc LICENSE LICENSE.GPLv3+
%{python3_sitelib}/lib389*
%files -n python%{python3_pkgversion}-lib389 -f %{pyproject_files}
%doc src/lib389/README.md
%license LICENSE LICENSE.GPLv3+
# Binaries
%{_sbindir}/dsconf
%{_mandir}/man8/dsconf.8.gz
%{_sbindir}/dscreate
%{_mandir}/man8/dscreate.8.gz
%{_sbindir}/dsctl
%{_mandir}/man8/dsctl.8.gz
%{_sbindir}/dsidm
%{_mandir}/man8/dsidm.8.gz
%{_sbindir}/openldap_to_ds
%{_libexecdir}/%{pkgname}/dscontainer
# Man pages
%{_mandir}/man8/dsconf.8.gz
%{_mandir}/man8/dscreate.8.gz
%{_mandir}/man8/dsctl.8.gz
%{_mandir}/man8/dsidm.8.gz
%{_mandir}/man8/openldap_to_ds.8.gz
%exclude %{_mandir}/man1
# Bash completions for scripts provided by python3-lib389
%{bash_completions_dir}/dsctl
%{bash_completions_dir}/dsconf
%{bash_completions_dir}/dscreate
@@ -1014,5 +1105,16 @@ exit 0
%doc README.md
%endif
%if %{with libbdb_ro}
%files robdb-libs
%license COPYING.librobdb COPYING.RPM
%doc %{_defaultdocdir}/%{name}-robdb-libs/README.md
%{_libdir}/%{pkgname}/librobdb.so
%{_licensedir}/%{name}-robdb-libs/COPYING
%{_licensedir}/%{name}/COPYING.RPM
%{_licensedir}/%{name}/COPYING.librobdb
%endif
%changelog
%autochangelog

@@ -1,3 +1,3 @@
SHA512 (jemalloc-5.3.0.tar.bz2) = 22907bb052096e2caffb6e4e23548aecc5cc9283dce476896a2b1127eee64170e3562fa2e7db9571298814a7a2c7df6e8d1fbe152bd3f3b0c1abec22a2de34b1
SHA512 (389-ds-base-3.1.1.tar.bz2) = c6aa0aba9779bd4ed6768f140d255474bc5e02455e37db6e8273740e9be81ac90bcab4ea97e117af573cb1d3f56ddd59d063b7715d99261ebf2d497c2801bc41
SHA512 (389-ds-base-3.1.3.tar.bz2) = bd15c29dba5209ed828a2534e51fd000fdd5d32862fd07ea73339e73489b3c79f1991c91592c75dbb67384c696a03c82378f156bbea594e2e17421c95ca4c6be
SHA512 (libdb-5.3.28-59.tar.bz2) = 731a434fa2e6487ebb05c458b0437456eb9f7991284beb08cb3e21931e23bdeddddbc95bfabe3a2f9f029fe69cd33a2d4f0f5ce6a9811e9c3b940cb6fde4bf79