
Tags

  • v1.0.19

    7b16ade7 · cal9 uademo fix copy ·
    Release: v1.0.19
    - RPM fixes for installers
    - documentation updated accordingly
  • v1.0.18

    - builds for [ cal9, cc7 ] X [ open6, ua, uademo ] available, images updated
    - renamed the rpms in a more explicit way to distinguish the versions
    - tested with cal9.2
    - JCOP production
  • v1.0.17

    Release: v1.0.17
    - rpm and bin available for 
       - cc7: open6, ua, uademo
       - cal9: open6
    - quasar 1.5.18, boost 1.81.0 / 1.75.0 (shared linkage) and cal9, cc7 images updated
    - improved PSconnected field: flips to false if there is no acquisition for >50 secs OR the lxi comm is down
    - HWconnected shows only the lxi comm status, unreliable
    - renamed all artifacts for more clarity: OpcUaLxi*<os>.<toolset>
    - fixed quasar bug in Server/CMakeLists.txt, added src/QuasarUaTraceHook.cpp # code is empty for open6, ifdefs
  • v1.0.16

    the reconnection behavior will be of "type A": if the network is lost but power stays ON and the network comes back, the socket is kept.
    details
    ----------
    the network socket (tcp/ip) with its lxi session is NOT reconnected on the PLH side, even when a lxi_disconnect followed by a lxi_connect is made. This stems from the aimTTi firmware. On the server, when doing a lxi_(re-)connect, the new session blocks on a mutex in the tcp/ip lxi comm layer, which basically means that the PLH does not reply. After a long (unknown) time the PLH might drop the session and wait for a new request. The session-expiry delay is not documented for the PLH. If the opcua-lxi server makes a new session request while the PLH is not ready, the server thread for that PLH is stuck forever on the (tcp/ip) session mutex.
    With the new "behavior A", the opcua-lxi server ignores transmission errors, does not close/reopen the socket (lxi disconnect, connect) and just KEEPS the session it has. Communication with the PLH comes back after about 30 secs once the network is restored. If the socket is not used for a longer time, the server's OS might clean it up and communication would be lost entirely; then a server and PS restart is needed, which corresponds to a full power cycle anyway. It is recommended to connect the PS power through a PDU in any case, so that the power cycling can be done remotely.
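    The "behavior A" policy above can be sketched roughly as follows. This is an illustrative Python sketch, not the server's actual C++ code; the class, method and attribute names are all assumptions.

```python
class LxiSessionBehaviorA:
    """Sketch of the "type A" policy: on a transmission error, KEEP the
    socket/session open and simply retry on the next polling cycle, instead
    of calling lxi_disconnect()/lxi_connect(), which can leave the PLH stuck
    on its tcp/ip session mutex. All names here are illustrative."""

    def __init__(self, transport):
        self.transport = transport        # stands in for the tcp/ip lxi layer
        self.consecutive_errors = 0       # feeds something like PSconnected

    def query(self, command):
        try:
            reply = self.transport.query(command)
            self.consecutive_errors = 0   # link is back, session survived
            return reply
        except IOError:
            # behavior A: swallow the error, keep the session; communication
            # typically resumes about 30 secs after the network is restored
            self.consecutive_errors += 1
            return None
```

    A failed query just returns nothing and bumps an error counter; the session object is never torn down, which is the whole point of behavior A.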
  • v1.0.15

    - ENS3221: reconnection issues improved: the ID reply does not always come up correctly after power up.
      Protect against this with a loop trying up to 10 times. Seems to work OK, but did not really fix the problem.
    - still, reconnection with the PL is shaky; further investigation needed ("reset")
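    The "try 10 times" workaround can be sketched like this (hypothetical Python; the real code is the server's C++, and `query` here is a placeholder for the actual lxi request call):

```python
def read_identity(query, attempts=10):
    """Try up to `attempts` times to get a usable ID reply after power-up,
    as a workaround for the flaky *IDN? answer (ENS3221). `query` is a
    placeholder for the real lxi request function."""
    for _ in range(attempts):
        reply = query("*IDN?")
        if reply:                 # accept any non-empty reply as valid
            return reply
    return None                   # gave up: reconnection still shaky
```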
  • v1.0.14

    Release: v1.0.14
    - increase certificate validity in ServerConfig.xml from 1 year to 10 years
    - some mini-code fixes
    - increased the reconnection delay from 1000ms to 3000ms to increase the chance of success ( https://its.cern.ch/jira/browse/ENS-32211 ). Also improved logging and a stricter reconnected flag in all cases.
  • v1.0.13

    b372da3b · triggered ·
    Release: v1.0.13
    - CPX400DP also added to the code
    - config.xml has to have "none" in the current command, as documented
  • v1.0.12

    b10b9daf · changelog ·
    Release: v1.0.12
  • v1.0.11

    98f95cf7 · release pdf ·
    Release: v1.0.11
    VERSION v1.0.11 FOR JCOP EDMS: https://edms.cern.ch/document/2581500/0.1
    - updated the python test for asyncio: python3.9.5 and asyncio may2021
    - updated the image for the asyncio tests: gitlab-registry.cern.ch/mludwig/docker:opcua-asyncio.cc7
    - checked the ports, OK at 23600
  • v1.0.10

    7cff16b7 · 1.0.10 ·
    Release: v1.0.10
    - solving OPCUA-2272: .open6 should load ServerConfig.xml to avoid port clashes and make it configurable. Also cleanup remaining issues with .open6 build
    - solving OPCUA-2124 alongside: quasar 1.5.1
    - lxi default port is 23600 as per ServerConfig.xml
    - C++17
    - no other functional changes, just CI, quasar update and port config
  • v1.0.9

    Release: v1.0.9
    - increased the lxi comm timeout from 1000ms to 5000ms to avoid connection problems; it seems to work more stably
  • v1.0.8

    Release: v1.0.8
    - increased delay to 1s between LXI commands for the aimTTi PL series
    - default for all unknown is 2 seconds
  • v1.0.7

    Release: v1.0.7
    - improved reporting of connect/disconnect: added field PSConnected
    - increased comm delay from 100ms to 150ms (SLOW) since occasionally a few timeouts still pop up with the qmd404
    - documentation appended accordingly
    - still on quasar 1.4.2
  • v1.0.6

    Release: v1.0.6
    - tag v1.0.6
    - further improved communication delays: there is now an asymmetric delay in the main polling loop, which is 800ms + 3*lxi comm delay
    - the lxi comm delay is per ps type, for aimTTi PL series it is 100ms
    - counter is alive also when trying to reconnect
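    The delay scheme above amounts to a simple formula. A hedged sketch (the function name is an assumption; only the 800ms + 3x formula and the 100ms aimTTi PL value come from this entry):

```python
def main_polling_delay_ms(lxi_comm_delay_ms):
    """Asymmetric delay for the main polling loop (v1.0.6 scheme):
    a fixed 800 ms plus three times the per-PS-type lxi comm delay."""
    return 800 + 3 * lxi_comm_delay_ms

# aimTTi PL series (100 ms comm delay): 800 + 3*100 = 1100 ms per cycle
```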
  • v1.0.5

    Release: v1.0.5
    cpack finally works:
    to install:
    sudo rpm -iv ./RPM/OpcUaLxiServer-1.0.5.open6.x86_64.rpm
    to uninstall:
    sudo rpm -e opcualxiserver-1.0.5-1.x86_64
    the package goes into /opt on the target machine
    delay adjustment for specific PS types also works; a small bug was corrected.
  • v1.0.4

    284189d7 · fix master CI ·
    Release: v1.0.4
    fixing communication delay (hopefully!)
    - updated all builder images
    - improved slightly code for comm delays
    - comm delays per PS type, hardcoded
    - liblxi shared linkage for ua, uademo
    - liblxi static for open6
  • v1.0.3

    244237cf · 1.0.3: delay added ·
    https://its.cern.ch/jira/browse/OPCUA-2044
    added a 30ms delay for all lxi commands to avoid saturation and disconnects from the aim-tti PS. This delay is globally hardcoded right now, but it can of course be refined "per family" later on if needed.
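    The throttling described here can be sketched as follows (illustrative Python; `send` is a placeholder for the real lxi call, and only the globally hardcoded 30 ms delay comes from this entry):

```python
import time

LXI_COMMAND_DELAY_S = 0.030   # globally hardcoded 30 ms (OPCUA-2044)

def send_throttled(send, command, delay_s=LXI_COMMAND_DELAY_S):
    """Send one lxi command, then pause, so that back-to-back commands do
    not saturate the aim-tti PS and trigger a disconnect."""
    result = send(command)
    time.sleep(delay_s)
    return result
```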
  • v1.0.2

    Release: v1.0.2
    just a minor fix version, no functionality change. The build chain has changed somehow (compat-related); this patch fixes the build problems for now, but we should move to quasar 1.4.2 soon anyway
    - open6 compat at 1.3.0, hardcoded ugly fix in cc7_open6.sh and gitlab_cc7_open6.sh
    - some new problems with build chain, two small fixes in scripts and .cmake toolchains
    - endpoint message for server/open6
    - no idea why the ua licensed pipeline actually runs; leave it for now
    - testing version sent to enrico for fwLxiPs development
  • v1.0.1

    6741bfcc · . ·
    Release: v1.0.1
    CI corrected, minor fixes in scripts
  • v1.0.0

    ef4e3a64 · cleanup ·
    Release: v1.0.0
    first release v1.0.0, based on quasar 1.4.0
    - all three toolkits (open6, ua, uademo) tested and ok, downloaded from artifacts
    - python asyncio script for ramping and subscription
    - documentation updated
    - some minor bugs corrected
    - status for PS and channel added, fields for individual bits
    - CI all stages dockerized and OK
    - RPM artifacts available, but not tested