util
Checkpointer
Source code in src/ursa/util/__init__.py
from_path(db_path)
classmethod
Create a checkpointer backed by a SQLite database.
Args
- db_path: The path to the SQLite database file (e.g. ./checkpoint.db) to be created.
Source code in src/ursa/util/__init__.py
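The exact implementation lives in the linked source; a minimal sketch of the idea, using only the stdlib `sqlite3` module (the function name and placeholder schema here are illustrative, not the real ones):

```python
import sqlite3
from pathlib import Path


def make_checkpoint_db(db_path: str) -> sqlite3.Connection:
    """Create (or open) a SQLite database file for checkpoints.

    Illustrative stand-in for Checkpointer.from_path; the real
    schema is defined in src/ursa/util/__init__.py.
    """
    path = Path(db_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(path)
    # A placeholder table so the file is a valid, non-empty database.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS checkpoints ("
        "id INTEGER PRIMARY KEY, payload BLOB)"
    )
    conn.commit()
    return conn
```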
diff_renderer
DiffRenderer
Renderable diff; use with `console.print(DiffRenderer(...))`.
Source code in src/ursa/util/diff_renderer.py
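`DiffRenderer` itself builds on Rich; the core idea of turning two texts into a printable diff can be sketched with the stdlib alone (the function name is illustrative):

```python
import difflib


def render_diff(before: str, after: str, name: str = "file") -> str:
    """Return a unified diff between two strings, ready to print.

    Plain-text stand-in for the Rich-based DiffRenderer.
    """
    lines = difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile=f"a/{name}",
        tofile=f"b/{name}",
    )
    return "".join(lines)
```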
helperFunctions
run_tool_calls(ai_msg, tools)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `ai_msg` | `AIMessage` | The LLM's `AIMessage` containing tool calls. | required |
| `tools` | `ToolRegistry \| Iterable[Runnable \| Callable[..., Any]]` | Either a dict `{name: tool}` or an iterable of tools (must have | required |
Returns:

| Name | Type | Description |
|---|---|---|
| `out` | `list[BaseMessage]` | `list[BaseMessage]` to feed back to the model |
Source code in src/ursa/util/helperFunctions.py
logo_generator
generate_logo_sync(*, problem_text, workspace, out_dir, filename=None, model='gpt-image-1', size=None, background='opaque', quality='high', n=1, overwrite=False, style='sticker', allow_text=False, palette=None, mode='logo', aspect='square', style_intensity='overt', console=None, image_model_provider='openai', image_provider_kwargs=None)
Generate an image. Default behavior matches previous versions (logo/sticker). To create a cinematic illustration, set mode='scene' and consider aspect='wide'.
Source code in src/ursa/util/logo_generator.py
memory_logger
AgentMemory
Simple wrapper around a persistent Chroma vector-store for agent-conversation memory.
Parameters
path : str | Path | None
Where to keep the on-disk Chroma DB. If None, a folder called
agent_memory_db is created in the package’s base directory.
collection_name : str
Name of the Chroma collection.
embedding_model :
Notes
- Requires `langchain-chroma` and `chromadb`.
Source code in src/ursa/util/memory_logger.py
add_memories(new_chunks, metadatas=None)
Append new text chunks to the existing store (must call build_index
first if the DB is empty).
Raises
RuntimeError If the vector store is not yet initialised.
Source code in src/ursa/util/memory_logger.py
build_index(chunks, metadatas=None)
Create a fresh vector store from chunks. Existing data (if any)
are overwritten.
Parameters
chunks : Sequence[str]
Text snippets (already chunked) to embed.
metadatas : Sequence[dict] | None
Optional metadata dict for each chunk, same length as chunks.
Source code in src/ursa/util/memory_logger.py
retrieve(query, k=4, with_scores=False, **search_kwargs)
Return the k most similar chunks for query.
Parameters
query : str
Natural-language question or statement.
k : int
How many results to return.
with_scores : bool
If True, also return similarity scores.
**search_kwargs
Extra kwargs forwarded to Chroma’s similarity_search* helpers.
Returns
list[Document] | list[tuple[Document, float]]
Source code in src/ursa/util/memory_logger.py
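`retrieve` delegates to Chroma's similarity search; the underlying idea (rank stored chunks by vector similarity to the query and return the top `k`) can be sketched without any dependencies. The bag-of-words "embedding" below is a toy stand-in for a real embedding model, purely illustrative:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real store uses a model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_sketch(query: str, chunks: list[str], k: int = 4,
                    with_scores: bool = False):
    """Return the k chunks most similar to query, optionally with scores.

    The real retrieve returns Documents (or (Document, score) pairs);
    here plain strings stand in.
    """
    q = embed(query)
    scored = sorted(
        ((cosine(q, embed(c)), c) for c in chunks), reverse=True
    )[:k]
    return scored if with_scores else [c for _, c in scored]
```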
delete_database(path=None)
Delete the on-disk Chroma vector store used for agent-conversation memory.
Parameters
path : str | Path | None
Where the on-disk Chroma DB to delete is located. If None, a folder called
agent_memory_db in the package’s base directory is assumed.
Source code in src/ursa/util/memory_logger.py
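Since a persistent Chroma DB lives in a directory on disk, deleting it amounts to removing that directory. A minimal sketch, assuming the default folder name from the docstring above (the real helper resolves the package's base directory when `path` is None):

```python
import shutil
from pathlib import Path


def delete_database_sketch(path=None) -> bool:
    """Remove the Chroma DB directory; True if something was deleted.

    Illustrative: defaults to ./agent_memory_db rather than the
    package's base directory.
    """
    target = Path(path) if path is not None else Path("agent_memory_db")
    if target.exists():
        shutil.rmtree(target)
        return True
    return False
```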
parse
extract_json(text)
Extract a JSON object or array from text that might contain markdown or other content.
The function attempts three strategies, in order:
- Extract JSON from a markdown code block labeled as JSON.
- Extract JSON from any markdown code block.
- Use bracket matching to extract a JSON substring starting with '{' or '['.
Returns:

| Type | Description |
|---|---|
| `list[dict]` | A Python object parsed from the JSON string (dict or list). |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If no valid JSON is found. |
Source code in src/ursa/util/parse.py
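The third strategy, bracket matching, can be sketched as follows. This is our simplified version of the idea, not the library's code; it tracks string literals so that brackets inside JSON strings don't throw off the depth count:

```python
import json


def extract_json_sketch(text: str):
    """Find the first '{' or '[' and parse the balanced JSON span."""
    start = min(
        (i for i in (text.find("{"), text.find("[")) if i != -1),
        default=-1,
    )
    if start == -1:
        raise ValueError("no JSON found")
    opener = text[start]
    closer = "}" if opener == "{" else "]"
    depth, in_str, escape = 0, False, False
    for i, ch in enumerate(text[start:], start):
        if in_str:
            # Inside a string literal: only watch for the closing quote.
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_str = False
        elif ch == '"':
            in_str = True
        elif ch == opener:
            depth += 1
        elif ch == closer:
            depth -= 1
            if depth == 0:
                return json.loads(text[start : i + 1])
    raise ValueError("no valid JSON found")
```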
extract_main_text_only(html, *, max_chars=250000)
Returns plain text with navigation, ads, and scripts removed. Prefers trafilatura, falling back to jusText, then to BS4 paragraphs.
Source code in src/ursa/util/parse.py
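As a dependency-free illustration of the final fallback (paragraph extraction), one can collect `<p>` text with the stdlib HTML parser. The real function prefers trafilatura and jusText, which do much more; this sketch only shows the shape of the fallback:

```python
from html.parser import HTMLParser


class ParagraphText(HTMLParser):
    """Collect text found inside <p> tags, skipping everything else."""

    def __init__(self):
        super().__init__()
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p and data.strip():
            self.chunks.append(data.strip())


def main_text_sketch(html: str, max_chars: int = 250_000) -> str:
    parser = ParagraphText()
    parser.feed(html)
    return "\n".join(parser.chunks)[:max_chars]
```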
read_text_file(path)
Reads the file at a given path into a string.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | String filename, with path, to read in | required |
Source code in src/ursa/util/parse.py
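The described behavior reduces to a one-liner with `pathlib` (the UTF-8 encoding here is our assumption, not stated by the source):

```python
from pathlib import Path


def read_text_file_sketch(path: str) -> str:
    """Read the whole file at path into a string (UTF-8 assumed)."""
    return Path(path).read_text(encoding="utf-8")
```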
resolve_pdf_from_osti_record(rec, *, headers=None, unpaywall_email=None, timeout=25)
Returns `(pdf_url, landing_used, note)`:
- `pdf_url`: direct downloadable PDF URL if found (or a strong candidate)
- `landing_used`: landing page URL we parsed (if any)
- `note`: brief trace of how we found it
Source code in src/ursa/util/parse.py
plan_renderer
render_plan_steps_rich(plan_steps, highlight_index=None)
Pretty table for a list of plan steps (strings or dicts), with an optional highlighted row.
Source code in src/ursa/util/plan_renderer.py
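The real renderer builds a Rich table; the same idea (a numbered table of steps, strings or dicts, with one row highlighted) can be sketched in plain text. The `description` key and the `>` marker are our assumptions for illustration:

```python
def render_plan_steps_plain(plan_steps, highlight_index=None) -> str:
    """Render plan steps (strings, or dicts assumed to carry a
    'description' key) as a numbered list, marking one row with '>'.
    """
    rows = []
    for i, step in enumerate(plan_steps):
        if isinstance(step, dict):
            text = step.get("description", str(step))
        else:
            text = str(step)
        marker = ">" if i == highlight_index else " "
        rows.append(f"{marker} {i + 1}. {text}")
    return "\n".join(rows)
```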