Blog

  • osSchedulingAlgorithms

    CPU Scheduling Algorithms in Operating Systems

CPU scheduling is the process of determining which process gets the CPU for execution while other processes are on hold.
The main task of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes in the ready queue for execution.
The selection is carried out by the CPU scheduler, which picks one of the processes in memory that is ready to run.

Types of CPU Scheduling Algorithms

There are six main types of process scheduling algorithms:

• First Come First Serve (FCFS)
• Shortest-Job-First (SJF)
• Round Robin
• Priority
• Multilevel Queue
• Shortest Remaining Time


    First Come First Serve

FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm: the process that requests the CPU first is allocated the CPU first. This scheduling method can be managed with a FIFO queue.

As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue. When the CPU becomes free, it is assigned to the process at the head of the queue.

    Characteristics of FCFS method:

It is a non-preemptive scheduling algorithm.

Jobs are always executed on a first-come, first-served basis. It is easy to implement and use. However, this method performs poorly, and the average wait time is often high.
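
As an illustration (not from the original article), FCFS waiting times can be computed with a short sketch. It assumes all jobs arrive at t = 0 and uses the classic burst times 24, 3 and 3:

```python
# FCFS: jobs run strictly in arrival order, so each job waits
# for the total burst time of everything queued before it.
def fcfs_waiting_times(burst_times):
    waits = []
    elapsed = 0
    for burst in burst_times:
        waits.append(elapsed)  # this job waits for all earlier jobs
        elapsed += burst
    return waits

# Bursts 24, 3, 3 give waits 0, 24, 27 (average 17):
# one long first job makes everyone else wait (the "convoy effect").
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
```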

    Shortest Remaining Time

SRT stands for Shortest Remaining Time. It is also known as preemptive SJF scheduling. In this method, the CPU is allocated to the process that is closest to completion, but a newly arriving process with a shorter remaining time can preempt the one currently running.

    Characteristics of SRT scheduling method:

This method is mostly applied in batch environments where short jobs need to be given preference.

It is not ideal for shared systems in which the required CPU time is unknown. Each process is associated with the length of its next CPU burst; the operating system uses these lengths to schedule the process with the shortest remaining time first.
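
A minimal unit-time simulation (illustrative, not part of the original text) shows how SRT preempts: at every time unit the scheduler picks the arrived process with the least remaining burst.

```python
# Preemptive SJF (shortest remaining time), simulated one time unit at a time.
def srt_completion_times(arrivals, bursts):
    n = len(bursts)
    remaining = list(bursts)
    completed = [0] * n
    clock = 0
    finished = 0
    while finished < n:
        # processes that have arrived and still need CPU time
        ready = [i for i in range(n) if arrivals[i] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1  # CPU idle until the next arrival
            continue
        current = min(ready, key=lambda i: remaining[i])  # least remaining burst wins
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            completed[current] = clock
            finished += 1
    return completed

# Four processes arriving at t = 0, 1, 2, 3 with bursts 8, 4, 9, 5:
# the short job arriving at t = 1 preempts the first one.
print(srt_completion_times([0, 1, 2, 3], [8, 4, 9, 5]))  # [17, 5, 26, 10]
```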

    Priority Based Scheduling

Priority scheduling is a method of scheduling processes based on priority: the scheduler selects tasks according to their priority.

The processes with higher priority are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, or other resource needs.
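
As a sketch (assuming non-preemptive scheduling, all jobs arriving at t = 0, and the common convention that a lower number means higher priority):

```python
# Non-preemptive priority scheduling: sort by priority (lower = higher),
# breaking ties FCFS by index, then run the jobs back to back.
def priority_waiting_times(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: (priorities[i], i))
    waits = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# Bursts [10, 1, 2, 1, 5] with priorities [3, 1, 4, 5, 2]:
# run order is P2, P5, P1, P3, P4.
print(priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2]))  # [6, 0, 16, 18, 1]
```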

    Round-Robin Scheduling

Round robin is one of the oldest and simplest scheduling algorithms. Its name comes from the round-robin principle, in which each person gets an equal share of something in turn. It is widely used for scheduling in multitasking systems, and it guarantees starvation-free execution of processes.

    Characteristics of Round-Robin Scheduling

Round robin is a clock-driven, preemptive model. The time slice assigned to each task should be kept small, though it may vary between processes; if the slice is too large, round robin degenerates into FCFS. It suits real-time systems that must respond to events within a specific time limit.
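
The time-slice behaviour can be sketched as a queue simulation (illustrative; all jobs are assumed to arrive at t = 0):

```python
from collections import deque

# Round robin: each job runs for at most one quantum, then goes
# to the back of the ready queue if it still needs CPU time.
def round_robin_completion_times(bursts, quantum):
    remaining = list(bursts)
    completed = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)  # not finished: back of the queue
        else:
            completed[i] = clock
    return completed

# Bursts 24, 3, 3 with a quantum of 4: the short jobs finish early
# instead of waiting behind the long one (compare with FCFS).
print(round_robin_completion_times([24, 3, 3], 4))  # [30, 7, 10]
```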

    Shortest Job First

SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution time is selected for execution next. This scheduling method can be preemptive or non-preemptive, and it significantly reduces the average waiting time of the processes awaiting execution.

    Characteristics of SJF Scheduling

Each job has an associated unit of time to complete. When the CPU becomes available, the process or job with the shortest completion time is executed first. It is typically implemented with a non-preemptive policy. This method is useful for batch-type processing, where waiting for jobs to complete is not critical. It improves throughput by executing shorter jobs first, as these mostly have a shorter turnaround time.
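
For example (a sketch assuming non-preemptive SJF with all jobs arriving at t = 0):

```python
# Non-preemptive SJF: run the shortest bursts first and
# report each job's waiting time in its original order.
def sjf_waiting_times(burst_times):
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits = [0] * len(burst_times)
    elapsed = 0
    for i in order:
        waits[i] = elapsed
        elapsed += burst_times[i]
    return waits

# Bursts 6, 8, 7, 3 run as 3, 6, 7, 8, giving waits [3, 16, 9, 0]
# (average 7, versus 10.25 if run in arrival order).
print(sjf_waiting_times([6, 8, 7, 3]))  # [3, 16, 9, 0]
```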

    Multiple-Level Queues Scheduling

    This algorithm separates the ready queue into various separate queues. In this method, processes are assigned to a queue based on a specific property of the process, like the process priority, size of the memory, etc.

However, this is not an independent scheduling algorithm: it relies on other algorithms to schedule the jobs within each queue.

    Characteristic of Multiple-Level Queues Scheduling:

Multiple queues are maintained for processes with common characteristics. Each queue may have its own scheduling algorithm, and priorities are assigned to the queues.

    The Purpose of a Scheduling algorithm

    Here are the reasons for using a scheduling algorithm:

    • The CPU uses scheduling to improve its efficiency.
    • It helps you to allocate resources among competing processes.
    • The maximum utilization of CPU can be obtained with multi-programming.
• The processes which are to be executed are placed in the ready queue.
    Visit original content creator repository https://github.com/Al-Taie/osSchedulingAlgorithms
  • demeter

    demeter

    demeter is a tool for downloading the .epub files you don’t have from a Calibre library. It does this by building a database of books it has seen based on some clever algorithms. At least, that’s the idea.

    (Demeter only allows scraping a host every 12 hours to prevent overloading the server.)

    Installation and Usage

    Download the appropriate demeter binary for your platform from the releases page.

This is a standalone binary; there's no need to install any dependencies.

Move it somewhere in your $PATH so you can call it with demeter.

    Add a Host

    demeter host add http://example.com:8080

Scrape all hosts, storing results in the ./books directory and downloading only the pdf extension:

    demeter scrape run -d books -e pdf

For the rest, use the built-in help.

    This tool can be used for whatever you want, enjoy.

    important note regarding extensions

The -e flag on the scrape run command only affects that specific run; books are stored without any extension information in the database. In general that means that if you switch from -e epub (the default) to -e mobi, you will only download new books in the mobi extension. Books that were already present will not be re-downloaded in a different extension.

    Database

    Demeter builds an internal database that is stored in ~/.demeter/demeter.db

    Scraping

    When scraping a host, demeter does the following:

    • Use the API to collect all book ids
• Check if there are new book ids since the previous scrape
    • Use the API to get the details for all the new book ids
    • Check the internal db if a book has already been downloaded
• Download the book if it hasn't been downloaded yet and add it to the internal db
    • Mark the host as scraped so it won’t do it again within 12 hours
    • If the host failed, mark it as failed and disable it after a while

    all commands

    $ demeter -h
    demeter is CLI application for scraping calibre hosts and
    retrieving books in epub format that are not in your local library.
    
    Usage:
      demeter [command]
    
    Available Commands:
      dl          download related commands
      help        Help about any command
      host        all host related commands
      scrape      all scrape related commands
    
    $ demeter dl -h
    download related commands
    
    Usage:
      demeter dl [command]
    
    Aliases:
      dl, download, downloads, dls
    
    Available Commands:
      add          add a number of hashes to the database
      deleterecent delete all downloads from this time period
      list         list all downloads
    
    $ demeter host -h
    all host related commands
    
    Usage:
      demeter host [command]
    
    Available Commands:
      add         add a host to the scrape list
      disable     disable a host
      enabled     make a host active
      list        list all hosts
      rm          delete a host
      stats       Get host stats
    
    $ demeter scrape -h
    all scrape related commands
    
    Usage:
      demeter scrape [command]
    
    Available Commands:
      run         run all scrape jobs
    
    $ demeter scrape run -h
    Go over all defined hosts and if the last scrape
    is old enough it will scrape that host.
    
    Usage:
      demeter scrape run [flags]
    
    Flags:
      -h, --help               help for run
      -d, --outputdir string   path to downloaded books to (default "books")
      -n, --stepsize int       number of books to request per query (default 50)
      -u, --useragent string   user agent used to identify to calibre hosts (default "demeter / v1")
      -w, --workers int        number of workers to concurrently download books (default 10)
    

    Visit original content creator repository
    https://github.com/gnur/demeter

  • nats.ocaml

    NATS – OCaml Client

    OCaml client for the NATS messaging system.

    License Apache 2

    Warning

    In active development! You can view the progress here.

    Basics

    • CONNECT
    • INFO
    • PUB
    • HPUB
    • HMSG
    • SUB/UNSUB
    • MSG
    • PING/PONG
    • OK/ERR

    Features

    • Drain mechanism
    • Request mechanism

    Packages

The project provides several packages with different implementations for different contexts.

Package            Description                                        Required OCaml
nats-client        implementation-independent protocol abstractions   >= 4.14 (LTS)
nats-client-lwt    lwt.unix-based client implementation

    Quick start

    Installation

    Currently only a development version is available. You can pin it using the OPAM package manager.

    $ opam pin nats-client-lwt.dev https://github.com/romanchechyotkin/nats.ocaml.git

    Publish-Subscribe example

    This example shows how to publish to a subject and handle its messages.

    open Lwt.Infix
    
    let main =
  (* Create a switch for automatic disposal of resources. *)
      Lwt_switch.with_switch @@ fun switch ->
      
      (* Connect to a NATS server by address 127.0.0.1:4222 with ECHO flag. *)
      let%lwt client =
        Nats_client_lwt.connect ~switch ~settings:[ `Echo ]
          (Uri.of_string "nats://127.0.0.1:4222")
      in
    
      (* Publish 'hello' message to greet.joe subject. *)
      Nats_client_lwt.pub client ~subject:"greet.joe" "hello";%lwt
    
      (* Subscribe to greet.* subject. *)
      let%lwt subscription =
        Nats_client_lwt.sub ~switch client ~subject:"greet.*" ()
      in
    
  (* Publish a 'hello' message to three subjects. *)
      Lwt_list.iter_s
        (fun subject -> Nats_client_lwt.pub client ~subject "hello")
        [ "greet.sue"; "greet.bob"; "greet.pam" ];%lwt
    
      (* Handle first three incoming messages to the greet.* subject. *)
      Lwt_stream.nget 3 subscription.messages
      >>= Lwt_list.iter_s (fun (message : Nats_client.Protocol.msg) ->
              Lwt_io.printlf "'%s' received on %s" message.payload.contents
                message.subject)
    
    let () = Lwt_main.run main

    Take it from examples/natsbyexample/publish_subscribe.ml.

    $ docker start -a nats-server
    $ dune exec ./examples/natsbyexample/publish_subscribe.exe
    'hello' received on greet.sue       
    'hello' received on greet.bob
    'hello' received on greet.pam
    

    By Examples

    See more examples at examples/ directory.

    Messaging

    References

    Contributing

This is an open source project under the Apache 2.0 license. Contributions are very welcome! Let’s build a great ecosystem together! Please be sure to read the CONTRIBUTING.md before your first commit.

    Visit original content creator repository https://github.com/romanchechyotkin/nats.ocaml
  • netbeans

    Apache NetBeans

    Apache NetBeans is an open source development environment, tooling platform, and application framework.

    Build status

    • GitHub actions
      • Build Status
      • Profiler Lib Native Binaries
      • NetBeans Native Execution Libraries
    • Apache Jenkins:
      • Linux: Build Status
      • Windows: Build Status
    • License Status ( Apache Rat and ant verify-libs-and-licenses )
      • Build Status

    Requirements

    • Git
    • Ant
    • JDK 17 or above (to build and run NetBeans)

    Notes:

    • NetBeans license violation checks are managed via the rat-exclusions.txt file.
    • Set JAVA_HOME and ANT_HOME appropriately or leave them undefined.

    Building NetBeans

    Build the default release config (See the cluster.config property.)

    $ ant build
    

    Build the basic project (mainly Java features):

    $ ant -Dcluster.config=basic build
    

Build the full project (may include clusters which are not in the release):

    $ ant -Dcluster.config=full build
    

    Build the NetBeans Platform:

    $ ant -Dcluster.config=platform build
    

    Cleanup:

    $ git clean -Xdf
    

    Notes:

    • You can also use php, enterprise, etc. See the cluster.properties file.
    • Once built, you can simply open individual modules of interest with NetBeans and run/rebuild/debug them like any other project
    • The nbbuild directory should contain the portable NetBeans zip

    Generating Javadoc

    Build javadoc:

    $ ant javadoc
    

    Notes:

• On JDK 24 or later, building javadoc may require raising the JAXP entity limit by setting export ANT_OPTS=-Djdk.xml.totalEntitySizeLimit=200000
• Run the javadoc-nb task in NetBeans to run the javadoc build and display it in a web browser.

    Running NetBeans

    Quick test run:

    $ ant tryme
    

    or run the portable zip distribution:

    1. extract the zip found in nbbuild to a directory other than nbbuild
    2. run the netbeans launcher found in netbeans/bin, optionally specifying a custom userdir

    example:

    $ netbeans --userdir /tmp/nbtestdir1
    

    Some useful Launcher Options

      --jdkhome <path>      path to JDK used as runtime (and default JDK for projects)
      --userdir <path>      use specified directory to store user settings
      --cachedir <path>     use specified directory to store user cache, must be different from userdir
      --fontsize <size>     set the base font size of the user interface, in points
      -J<jvm_option>        pass <jvm_option> to JVM
      --help                list more options
    

    Get In Touch

    Download

    Reporting Bugs

    Log, Config and Cache Locations

    • start config (JVM settings, default JDK, userdir, cachedir location and more):
      netbeans/etc/netbeans.conf
    • user settings storage (preferences, installed plugins, logs):
      system dependent, see Help -> About for concrete location
    • cache files (maven index, search index etc):
      system dependent, see Help -> About for concrete location
    • default log location (tip: can be inspected via View -> IDE Log):
      $DEFAULT_USERDIR_ROOT/var/log/messages.log

    Note: removing/changing the user settings directory will reset NetBeans to first launch defaults

    Other Repositories

    Full History

The origins of the code in this repository are older than its Apache existence. As such, a significant part of the history (from before the code was donated to Apache) is kept in an independent repository. To fully understand the code you may want to merge the modern and ancient versions together:

    $ git clone https://github.com/apache/netbeans.git
    $ cd netbeans
    $ git log platform/uihandler/arch.xml

This gives you just a few log entries, including the initial check-in and the change of the file headers to Apache. But then the magic comes:

    $ git remote add emilian https://github.com/emilianbold/netbeans-releases.git
    $ git fetch emilian # this takes a while, the history is huge!
    $ git replace 6daa72c98 32042637 # the 1st donation
    $ git replace 6035076ee 32042637 # the 2nd donation

    When you search the log, or use the blame tool, the full history is available:

    $ git log platform/uihandler/arch.xml
    $ git blame platform/uihandler/arch.xml

    Many thanks to Emilian Bold who converted the ancient history to his Git repository and made the magic possible!

    Visit original content creator repository https://github.com/apache/netbeans
  • ec-sign-js

    ec-sign-js

    Build License

    ECDSA cryptographic signature library for JavaScript.

Elliptic Curve     Hash Algorithm
P-256 (default)    SHA256 (default)
P-384              SHA384
secp256k1          SHA256

    Requirements

    • Node.js 18 or higher

    How to install

    npm i ec-sign

    How to use library

    Generate keypair

    const sign = require('ec-sign')
    
    // Synchronous
    const keypair = sign.SignUtils.generateKeyPairSync('secp224r1');
    
    // Asynchronous
    const keypair = await sign.SignUtils.generateKeyPair('secp224r1');

    Converts public key to PEM

    const pubPem = sign.SignUtils.toPem(keypair.publicKey);
    
    console.info(pubPem);
    // -----BEGIN PUBLIC KEY-----
    // MFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEm1eVSAq73aR2Oo8L8rvDzBU214+uhgIj
    // MkiasZgxKDJtMbGosVVCPd8drgkr3NrZ1Eqhrf0mveProOsJdaF5Ag==
    // -----END PUBLIC KEY-----

    Converts private key to PEM

    const priPem = sign.SignUtils.toPem(keypair.privateKey);
    
    console.info(priPem);
    // -----BEGIN PRIVATE KEY-----
    // MIGEAgEAMBAGByqGSM49AgEGBSuBBAAKBG0wawIBAQQgH4RMksnOnI68DAm0PzqQ
    // rtS1oznTSsb/pVDQLNPguqShRANCAASbV5VICrvdpHY6jwvyu8PMFTbXj66GAiMy
    // SJqxmDEoMm0xsaixVUI93x2uCSvc2tnUSqGt/Sa94+ug6wl1oXkC
    // -----END PRIVATE KEY-----

    Sign with data and private key

    const signer = new sign.Signer(priPem);
    const result = signer.sign("hello, message");
    
    console.info(result.timestamp);
    // 1688895463045
    
    console.info(result.signature);
    // MEYCIQCeYobZ2BIoL7jCV4eGYrT/yXGtNLhEFY2MchsIDGCsywIhAMwak6nBiHgJsNfuY2zSdcX235Xy7Ucj2bGMvFh/xdTy

    Verify signature with data, timestamp and public key

    const verifier = new sign.Verifier(pubPem);
    const valid = verifier.verify("hello, message", result.timestamp, result.signature.toString());
    
    console.info(`signature was verified: ${valid}`);
    // signature was verified: true

    How to build from source

    prerequisites

Node.js, npm and git need to be installed.

    git clone https://github.com/rising3/ec-sign-js.git
    cd ec-sign-js
    npm i
    npm run test
    npm run build

    License

    Apache 2.0

    Visit original content creator repository https://github.com/rising3/ec-sign-js
  • Check

✅ Check – Simulado (Mock Exam)

This site was built to simulate the Encceja exams (Exame Nacional para Certificação de Competências de Jovens e Adultos). Its main goal is to support people who intend to take the exam by providing mock-exam results and, based on a 4.5% accuracy threshold per subject, the subjects they need to study. The front-end was built with React.js, TypeScript and Bootstrap; the back-end uses Node.js, Express and the pdf2json library. It is hosted on Vercel and the source code is available on my GitHub.

🚀 Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

🔧 Installation

A series of step-by-step examples showing what you need to run to get a development environment going.

Downloading the project:

    git clone https://github.com/maikeg-alves/Check.git
    

Installing the dependencies:

    yarn install 
    

Running:

    yarn dev 
    

📦 Deployment

Visit the site: CLICK HERE

🛠️ Built with


⌨️ with ❤️ by Maicon Gabriel Alves 😊

    Visit original content creator repository
    https://github.com/maikeg-alves/Check

  • ether-goblin

    Ether Goblin

    GitHub Actions

    A microservice for the Ethereum ecosystem.

    Features

• APIs RESTful APIs for the Ethereum ecosystem
    • Watchdog A simple balance Watchdog
      • Low/High balance alert (reach limit)
      • Balance change alert
      • Alert mail with PGP
      • Callback
    • Event ERC721 event
      • Event Listener A listener to be triggered for each ERC721 event
      • Event Fetcher Fetching ERC721 events from block history
      • Support Mint/Transfer/Burn event
    • Out-of-the-box NFT APIs (under development)
      • Event Fetcher APIs
        • GetTokenHistory
    • API authorization via 2FA token
    • Microservice run in Docker

    Supported Chains

    Development Environment

    • typescript 4.9.3
    • node v18.12.1
    • ts-node v10.9.1
    • yarn v1.22.19

    Contract Dependencies

    • @openzeppelin/contracts: 4.8.0

    Quick Guide

    • Install dependency

      yarn
    • Build code

      Install all dependencies and compile code.

      make build
    • Build docker image

      make docker
    • Run

      • Params

        • --config Config filepath. Example:

          ts-node ./src/main/index.ts --config ./conf/app.config.yaml
      • Run code directly by ts-node

        yarn dev-run --config ./conf/app.config.yaml
      • Run compiled code by node

        yarn dist-run --config ./conf/app.config.yaml
    • Clean

      make clean

    Roadmap

    • Documents
    • ERC721(NFT) APIs
    • ERC20 APIs
    • WebSite
    • Improve Watchdog performance

    License

    MIT

    Visit original content creator repository https://github.com/jovijovi/ether-goblin
  • gocq-spring-boot-starter

    gocq-spring-boot-starter

Maven Central GitHub release Java support License GitHub Stars GitHub Forks user repos GitHub Contributors QQ Group

Yet another QQ bot framework/SDK based on go-cqhttp, Spring Boot, and reverse WebSocket.

A humble imitation of lz1998's project.

This project is developed against Spring Boot 2.7.7.

1. Plugin Development

1.1 Create a Spring Boot project and add the Maven dependency

    <dependency>
        <groupId>top.nkdark</groupId>
        <artifactId>gocq-spring-boot-starter</artifactId>
        <version>1.0.0</version>
    </dependency>

1.2 Writing plugins

1.2.1 Examples

Java example

    package org.example.bot;
    
    import org.jetbrains.annotations.NotNull;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;
    import top.nkdark.gocq.bot.Bot;
    import top.nkdark.gocq.bot.BotPlugin;
    import top.nkdark.gocq.proto.GroupMessageEvent;
    import top.nkdark.gocq.proto.PrivateMessageEvent;
import top.nkdark.gocq.proto.guild.GuildMessageEvent;
    import top.nkdark.gocq.util.CQCode;
    
/**
 * Example plugin.
 * 1. A plugin extends BotPlugin.
 * 2. Annotate it with @Component.
 */
    @Component
    public class JavaTestPlugin extends BotPlugin {
    
    /**
     * Optional logger.
     * You can also use Lombok's @Slf4j annotation instead.
     */
        private final Logger log = LoggerFactory.getLogger(JavaTestPlugin.class);
    
    /**
     * Called when a private message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendPrivateMsg
     * @param event event object carrying the message content, sender QQ id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    @Override
    public int onPrivateMessage(@NotNull Bot bot, @NotNull PrivateMessageEvent event) {
        // Get the sender QQ id and the message content
        long userId = event.getUserId();
        String msg = event.getMessage();
        // Log to the console
        log.info(msg);
        // Echo the received message back
        bot.sendPrivateMsg(userId, msg, false);
        // Continue to the next plugin
        return NotMatch;
    }
    
    /**
     * Called when a group message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendGroupMsg
     * @param event event object carrying the message content, group id, sender QQ id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    @Override
    public int onGroupMessage(@NotNull Bot bot, @NotNull GroupMessageEvent event) {
        // Get the message content, group id and sender QQ id
        String msg = event.getMessage();
        long groupId = event.getGroupId();
        long userId = event.getUserId();
        // Log to the console
        log.info(msg);
        // Compose the reply
        String respContent = CQCode.INSTANCE.at(114514) + "1919810";
        // Send the reply to the group via the bot instance
        bot.sendGroupMsg(groupId, respContent, false);
        // Stop the plugin chain
        return MatchedAndBlock;
    }
    
    /**
     * Called when a guild (channel) message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendGroupMsg
     * @param event event object carrying the message content, channel id, sender id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    @Override
    public int onGuildMessage(@NotNull Bot bot, @NotNull GuildMessageEvent event) {
        log.info(event.toString());
        return MatchedAndBlock;
    }
    }

Kotlin example

    package org.example.bot
    
    import org.slf4j.LoggerFactory
    import org.springframework.stereotype.Component
    import top.nkdark.gocq.bot.Bot
    import top.nkdark.gocq.bot.BotPlugin
    import top.nkdark.gocq.proto.GroupMessageEvent
    import top.nkdark.gocq.proto.PrivateMessageEvent
    import top.nkdark.gocq.proto.guild.GuildMessageEvent
    import top.nkdark.gocq.util.CQCode
    
/**
 * Example plugin.
 * 1. A plugin extends BotPlugin.
 * 2. Annotate it with @Component.
 */
    @Component
    class TestPlugin : BotPlugin() {
    
    /**
     * Optional logger.
     * You can also use Lombok's @Slf4j annotation instead.
     */
        private val log = LoggerFactory.getLogger(TestPlugin::class.java)
    
    /**
     * Called when a private message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendPrivateMsg
     * @param event event object carrying the message content, sender QQ id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    override fun onPrivateMessage(bot: Bot, event: PrivateMessageEvent): Int {
        // Get the sender QQ id and the message content
        val userId = event.userId
        val msg = event.message
        // Echo the received message back
        bot.sendPrivateMsg(userId, msg)
        // Log at debug level
        log.debug("Echoed a message from ${event.sender.nickname}")
        return NotMatch
    }
        
    /**
     * Called when a group message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendGroupMsg
     * @param event event object carrying the message content, group id, sender QQ id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    override fun onGroupMessage(bot: Bot, event: GroupMessageEvent): Int {
        // Get the message content, group id and sender QQ id
        val msg: String = event.message
        val groupId: Long = event.groupId
        val userId: Long = event.userId
        // Log to the console
        log.info(msg)
        // Compose the reply
        val respContent = CQCode.at(114514) + "1919810"
        // Send the reply to the group via the bot instance
        bot.sendGroupMsg(groupId, respContent, false)
        return MatchedAndBlock
    }
    
    /**
     * Called when a guild (channel) message is received.
     *
     * @param bot   bot instance, used to call APIs such as sendGroupMsg
     * @param event event object carrying the message content, channel id, sender id, etc.
     * @return whether to invoke the next plugin: MatchedAndBlock stops the chain,
     *         NotMatch continues
     */
    override fun onGuildMessage(bot: Bot, event: GuildMessageEvent): Int {
        log.info(event.toString())
        return MatchedAndBlock
    }
    }

1.2.2 Edit the configuration file

Edit resource/application.yml:

    server:
  port: 12345 # choose a port as appropriate
    
    spring:
      cq:
    # Add the fully qualified class names of your plugins here.
    # When a message arrives, the plugins below are invoked in order.
    # If a plugin returns MatchedAndBlock, the remaining plugins are skipped.
    # If a plugin returns NotMatch, the next plugin is invoked.
    plugin-list:
      - org.example.bot.TestPlugin
      - org.example.bot.JavaTestPlugin

1.3 Build and deploy

1. Package the application with Maven:
mvn clean package
2. The packaged jar will appear under the ${project path}/target directory.

1.4 Run the application

    java -jar xxx.jar

    Todo

    • go-cqhttp Api
• Docs/comments
• A builder for custom merged/forwarded messages (currently users must construct them themselves)
• Test all APIs (community help welcome)

    Reference

    pbbot-spring-boot-starter

    Spring-CQ

    Visit original content creator repository https://github.com/NKDark/gocq-spring-boot-starter
  • rica

    Rica

    DOI

    As part of the ME-ICA pipeline, Rica (Reports for ICA) provides a reporting tool for ICA decompositions performed with tedana and aroma.

    Pronunciation: [ˈrika]. For an audio recording on how to pronounce Rica see here.

    About

    Rica originally came out as an alternative to the reports provided by tedana, with the aim of making manual classification of ICA components possible. At the same time, the tool aspires to be of value for ICA decompositions made with tools other than tedana. Rica assumes you’re working with files that mimic the outputs of tedana.

    How to use Rica

    Even if Rica is designed to be simple to use, you might want to see how you can use the app by watching this tutorial video.

    Rica also supports keyboard shortcuts on the ICA components page. You can use the following shortcuts:

    • a: Accept component.
    • r: Reject component.
    • i: Ignore component.
    • left arrow: Go to previous component.
    • right arrow: Go to next component.

    Using Rica online

    Just head over to https://rica-fmri.netlify.app and have fun!

    Using Rica locally

    Installation

    Rica can be installed by cloning this repository and executing the following command in the cloned repository:

    npm install

    In order to run the tool locally, two options exist:

    1. Using a localhost

Execute the npm start command in the cloned repository and Rica will open in a new browser tab at http://localhost:3000, where you can use the tool.

    2. Compiling the tool

    You could also compile the project so that you can use the tool just by opening an HTML file. For that, it is necessary to execute the following commands in the cloned repository.

    npm run build
    npx gulp
    mv build/index.html build/rica.html
    open build/rica.html

    Pro tip: when you open rica.html for the first time, BOOKMARK IT 😉

    Getting involved

    Want to learn more about our plans for developing Rica? Have a question, comment, or suggestion? Open or comment on one of our issues!

    Visit original content creator repository https://github.com/ME-ICA/rica
  • tetgenpy

    tetgenpy

    main PyPI version

    tetgenpy is a python wrapper for Hang Si‘s TetGen – A Quality Tetrahedral Mesh Generator and a 3D Delaunay Triangulator. It helps to prepare various types of inputs – points, piecewise linear complexes (PLCs), and tetmesh – for tetrahedron mesh generation based on simple python types, such as list and numpy.ndarray.

    Install

    pip install tetgenpy

    For current development version,

    pip install git+https://github.com/tataratat/tetgenpy.git@main

    Quick Start

    Following is an example for tetrahedralization of a unit cube defined as PLCs. Alternatively, you could also use tetgenpy.PLC class to prepare TetgenIO.

    import tetgenpy
    import numpy as np
    
    # tetrahedralize unit cube
    # define points
    points=[
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [1.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [1.0, 0.0, 1.0],
        [0.0, 1.0, 1.0],
        [1.0, 1.0, 1.0],
    ]
    
    # define facets - it can be list of polygons.
    # here, they are hexa faces
    facets = [
        [1, 0, 2, 3],
        [0, 1, 5, 4],
        [2, 0, 4, 6],
        [1, 3, 7, 5],
        [3, 2, 6, 7],
        [4, 5, 7, 6],
    ]
    
    # prepare TetgenIO - input for tetgen
    tetgen_in = tetgenpy.TetgenIO()
    
    # set points, facets, and facet_markers.
    # facet_markers can be useful for setting boundary conditions
    tetgen_in.setup_plc(
        points=points,
        facets=facets,
        facet_markers=[[i] for i in range(1, len(facets) + 1)],
    )
    
    # tetgen's tetraheralize function with switches
    tetgen_out = tetgenpy.tetrahedralize("qa.05", tetgen_in)
    
    # unpack output
    print(tetgen_out.points())
    print(tetgen_out.tetrahedra())
    print(tetgen_out.trifaces())
    print(tetgen_out.trifacemarkers())

    This package also provides access to tetgen’s binary executable. Try:

    $ tetgen -h

    Working with vedo

vedo natively supports tetgenpy.TetgenIO types starting with version >=2023.5.1. It is a python module for scientific analysis and visualization of 3D objects that can be used to enhance further workflows. You can find an example (same as above) here, or simply try:

    pip install vedo
    vedo -r tetgen1

    Contributing

    Write an issue or create a pull request! A simple guideline can be found at CONTRIBUTING.md

    Visit original content creator repository https://github.com/tataratat/tetgenpy