32,556 Hits in 1.8 sec

Friendship through Fourteenth-Century Fissures: Dai Liang, Wu Sidao and Ding Henian

Anne Gerritsen
2007 Nan nü: Men, Women and Gender in China  
Dai Liang and Ding Henian became friends in Siming.  ...  When Dai Liang and Wu Sidao write their respective biographies of Ding Henian, they construct a Confucian identity for Ding.  ... 
doi:10.1163/138768007x171713 fatcat:hb52dsnslrc6dm2hkzfc72b56q

Incident cerebral microbleeds and hypertension defined by the 2017 ACC/AHA Guidelines

Yiwei Xia, Yi Wang, Lumeng Yang, Yiqing Wang, Xiaoniu Liang, Qianhua Zhao, Jianjun Wu, Shuguang Chu, Zonghui Liang, Hansheng Ding, Ding Ding, Xin Cheng (+1 others)
2021 Annals of Translational Medicine  
The cut-off for hypertension was lowered to blood pressure (BP) over 130/80 mmHg in the 2017 American College of Cardiology/American Heart Association (ACC/AHA) guideline. Whether the new definition of hypertension remains a potent risk factor for cerebral microbleeds (CMBs) is uncertain. We aimed to analyze the relationship between the new definition of hypertension and incident CMBs in a 7-year longitudinal community study. This study is a sub-study of the Shanghai Aging Study (SAS). A total of 317 participants without stroke or dementia were included at baseline (2009-2011), and were invited to repeated clinical examinations and cerebral magnetic resonance imaging (MRI) at follow-up (2016-2018). CMBs at baseline and follow-up were evaluated on T2*-weighted gradient recalled echo (GRE) and susceptibility-weighted angiography (SWAN) sequences of MRI. We classified baseline BP into four categories according to the ACC/AHA guideline: normal BP, elevated systolic BP, stage 1 hypertension, and stage 2 hypertension. We assessed the associations between BP categories and incident CMBs using generalized linear models. A total of 159 participants (median age, 67 years) completed follow-up examinations with a mean interval of 6.9 years. Both stage 1 and stage 2 hypertension at baseline were significantly associated with a higher risk of incident CMBs (IRR 2.77, 95% CI, 1.11-6.91, P=0.028; IRR 3.04, 95% CI, 1.29-7.16, P=0.011, respectively), indicating dose-response effects across BP categories. Participants with ≥5 incident CMBs or incident CMBs in deep locations all had baseline stage 1 or stage 2 hypertension. Participants with baseline stage 1 or stage 2 hypertension had a significantly higher risk of incident CMBs in this 7-year longitudinal community cohort.
doi:10.21037/atm-20-5142 pmid:33708941 pmcid:PMC7944264 fatcat:tfifeylmwbazjjqjomqn3j67jm
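
For readers who want the guideline cut-offs in executable form, here is a minimal sketch of the four-category BP classification the study uses; the function name and labels are illustrative, and only the mmHg thresholds come from the 2017 ACC/AHA definition.

```python
# Illustrative sketch of the 2017 ACC/AHA blood-pressure categories used in
# the study. The function name and return labels are hypothetical; only the
# guideline cut-offs (mmHg) are taken from the ACC/AHA definition.

def acc_aha_category(systolic: float, diastolic: float) -> str:
    """Classify one BP reading into the four 2017 ACC/AHA categories."""
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"     # the new, lower 130/80 cut-off
    if systolic >= 120:                   # here diastolic is already < 80
        return "elevated systolic BP"
    return "normal BP"

assert acc_aha_category(128, 79) == "elevated systolic BP"
assert acc_aha_category(132, 78) == "stage 1 hypertension"
```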

Efficient manganese luminescence induced by Ce3+-Mn2+ energy transfer in rare earth fluoride and phosphate nanocrystals

Yun Ding, Liang-Bo Liang, Min Li, Ding-Fei He, Liang Xu, Pan Wang, Xue-Feng Yu
2011 Nanoscale Research Letters  
© 2011 Ding et al; licensee Springer.  ... 
doi:10.1186/1556-276x-6-119 pmid:21711641 pmcid:PMC3211164 fatcat:6baoeko6bbbxppd4vljc4xnliy

Response

Huoyan Liang, Xianfei Ding, Tongwen Sun
2019 Critical Care  
This comment refers to the article available at https://doi.org/10.1186/s13054-019-2392-y  To the Editor: We thank Dr.  ... 
doi:10.1186/s13054-019-2458-x pmid:31109369 pmcid:PMC6526608 fatcat:3l54kum4uzgy7fvsf6ytgnifw4

Scalable Stochastic Kriging with Markovian Covariances [article]

Liang Ding, Xiaowei Zhang
2018 arXiv   pre-print
Stochastic kriging is a popular technique for simulation metamodeling due to its flexibility and analytical tractability. Its computational bottleneck is the inversion of a covariance matrix, which takes O(n^3) time in general and becomes prohibitive for large n, where n is the number of design points. Moreover, the covariance matrix is often ill-conditioned for large n, and thus the inversion is prone to numerical instability, resulting in erroneous parameter estimation and prediction. These two numerical issues preclude the use of stochastic kriging at a large scale. This paper presents a novel approach to address them. We construct a class of covariance functions, called Markovian covariance functions (MCFs), which have two properties: (i) the associated covariance matrices can be inverted analytically, and (ii) the inverse matrices are sparse. With the use of MCFs, the inversion-related computational time is reduced to O(n^2) in general, and can be further reduced by orders of magnitude with additional assumptions on the simulation errors and design points. The analytical invertibility also enhances the numerical stability dramatically. The key in our approach is that we identify a general functional form of covariance functions that can induce sparsity in the corresponding inverse matrices. We also establish a connection between MCFs and linear ordinary differential equations. Such a connection provides a flexible, principled approach to constructing a wide class of MCFs. Extensive numerical experiments demonstrate that stochastic kriging with MCFs can handle large-scale problems in a manner that is both computationally efficient and numerically stable.
arXiv:1803.02575v1 fatcat:2ptjmgwifncrlknjvqobyg5jry
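
The paper's MCF construction is not reproduced here, but the core phenomenon, a Markovian covariance whose matrix has a sparse (tridiagonal) inverse, can be checked in a few lines with the exponential (Ornstein-Uhlenbeck) kernel, which is Markovian in one dimension. This toy check is an illustration of the idea, not the authors' method.

```python
# Sketch of why Markovian covariances yield sparse inverses: the exponential
# (Ornstein-Uhlenbeck) kernel is Markovian in 1-D, so its covariance matrix
# on sorted design points has a tridiagonal inverse. Toy illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, size=200))        # sorted 1-D design points
K = np.exp(-np.abs(x[:, None] - x[None, :]))     # exponential covariance matrix
P = np.linalg.inv(K)                             # precision (inverse) matrix

# Entries more than one off-diagonal away from the diagonal are numerically zero.
idx = np.arange(200)
off_band = np.abs(idx[:, None] - idx[None, :]) > 1
print(np.max(np.abs(P[off_band])))               # prints a value near zero
```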

Domain Generalization by Learning and Removing Domain-specific Features [article]

Yu Ding, Lei Wang, Bin Liang, Shuming Liang, Yang Wang, Fang Chen
2022 arXiv   pre-print
Acknowledgment Yu Ding was supported by CSIRO Data61 PhD Scholarship and the University of Wollongong International Postgraduate Tuition Award.  ... 
arXiv:2212.07101v1 fatcat:ookhtpiysvgtvg7klmbzhcajpu

Improving Robust Fairness via Balance Adversarial Training [article]

Chunyu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, XiangLong Liu, Aishan Liu
2022 arXiv   pre-print
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes, known as the robust fairness problem. Previously proposed Fair Robust Learning (FRL) adaptively reweights different classes to improve fairness. However, the performance of the better-performing classes decreases, leading to a strong performance drop. In this paper, we observed two unfair phenomena during adversarial training: different difficulties in generating adversarial examples from each class (source-class fairness) and disparate target class tendencies when generating adversarial examples (target-class fairness). From these observations, we propose Balance Adversarial Training (BAT) to address the robust fairness problem. Regarding source-class fairness, we adjust the attack strength and difficulty for each class to generate samples near the decision boundary, enabling easier and fairer model learning; considering target-class fairness, by introducing a uniform distribution constraint, we encourage the adversarial example generation process for each class to have a fair tendency. Extensive experiments conducted on multiple datasets (CIFAR-10, CIFAR-100, and ImageNette) demonstrate that our method can significantly outperform other baselines in mitigating the robust fairness problem (+5-10% on the worst class accuracy).
arXiv:2209.07534v1 fatcat:dolqsuapivb6vm3452m42bmiee
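
As a rough sketch of the source-class-fairness idea (adjusting attack strength per class), one might scale the perturbation budget by each class's current robust accuracy; the helper below is hypothetical and is not the authors' exact BAT procedure.

```python
# Hedged sketch of per-class attack-strength adjustment: classes the model
# already defends well get (up to) the full budget, while struggling classes
# get a smaller epsilon, keeping their adversarial examples nearer the
# decision boundary. Illustrative scaling rule, not the paper's algorithm.
import torch

def per_class_epsilon(base_eps: float, robust_acc: torch.Tensor) -> torch.Tensor:
    """robust_acc: (num_classes,) per-class robust accuracy in [0, 1]."""
    scale = 0.5 + 0.5 * robust_acc / robust_acc.mean().clamp(min=1e-8)
    return (base_eps * scale).clamp(max=2 * base_eps)

eps = per_class_epsilon(8 / 255, torch.tensor([0.6, 0.2, 0.5]))
print(eps)   # the 0.2-accuracy class receives a smaller attack budget
```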

Preparation and Anticorrosive Property of Soluble Aniline Tetramer

Yongbo Ding, Jia Liang, Gaofeng Liu, Wenting Ni, Liang Shen
2019 Coatings  
Figure 2. FTIR spectra of aniline tetramer.  ... 
doi:10.3390/coatings9060399 fatcat:7ud7yaipjbcxrgjjzthfblmgti

Passive Source Localization of Sensor Arrays [chapter]

Junli Liang, Ding Liu
2012 Sensor Array  
How to reference: In order to correctly reference this scholarly work, feel free to copy and paste the following: Junli Liang and Ding Liu (2012).  ... 
doi:10.5772/35622 fatcat:52bcwq7znjervgb7wvawkwypne

A new macrocyclic diterpenoid from Anisomeles indica

Chen-Liang Chao, Hui-Chi Huang, Hsiou-Yu Ding, Jenn-Haung Lai, Hang-Ching Lin, Wen-Liang Chang
2019 Figshare  
A new macrocyclic diterpenoid, 4β,5β-dihydroxyovatodiolide (1), together with twenty-two known compounds (2-23), were isolated from the MeOH extract of the dried aerial parts of Anisomeles indica (L.) O. Kuntze (Labiatae). The structure of 1 was established on the basis of spectral evidence. The phenylethanoids acteoside (5) and isoacteoside (6) showed significant inhibitory effects on IL-2 secretion with respect to phorbol myristate acetate and anti-CD28 monoclonal antibody co-stimulated activation of human peripheral blood T cells.
doi:10.6084/m9.figshare.7887884.v1 fatcat:wz4dje52snbfbl5eezxtgblfmi

Improving Neural Machine Translation by Bidirectional Training [article]

Liang Ding, Di Wu, Dacheng Tao
2021 arXiv   pre-print
Motivated by this, Levinboim et al. (2015) and Liang et al. (2007) propose to model the invertibility between bilingual languages.  ... 
arXiv:2109.07780v1 fatcat:oehjmbeurvcyfjtsh6dybehv5i

Recurrent Graph Syntax Encoder for Neural Machine Translation [article]

Liang Ding, Dacheng Tao
2019 arXiv   pre-print
Syntax-incorporated machine translation models have been proven successful in improving the model's reasoning and meaning-preservation ability. In this paper, we propose a simple yet effective graph-structured encoder, the Recurrent Graph Syntax Encoder, dubbed RGSE, which enhances the ability to capture useful syntactic information. The RGSE is built over a standard encoder (recurrent or self-attention encoder), regarding recurrent network units as graph nodes and injecting syntactic dependencies as edges, such that RGSE models syntactic dependencies and sequential information (i.e., word order) simultaneously. Our approach achieves considerable improvements over several syntax-aware NMT models in English-German and English-Czech translation tasks. And the RGSE-equipped big model obtains competitive results compared with the state-of-the-art model in the WMT14 En-De task. Extensive analysis further verifies that RGSE could benefit long-sentence modeling and produces better translations.
arXiv:1908.06559v1 fatcat:aybvkay57fc35mzohayuymguuy
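
A minimal sketch of the described design, a recurrent encoder whose states also receive messages along dependency edges, might look as follows; the module name, the single message-passing step, and the combination rule are all illustrative assumptions, not the paper's exact RGSE.

```python
# Sketch: run a recurrent encoder, then let each state receive a message from
# its syntactic head, so word order (recurrence) and syntax (graph edges) are
# modeled together. Hypothetical single-layer variant for illustration.
import torch
import torch.nn as nn

class RecurrentGraphSyntaxEncoder(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.edge_proj = nn.Linear(d_model, d_model)

    def forward(self, emb: torch.Tensor, heads: torch.Tensor) -> torch.Tensor:
        # emb: (batch, seq, d); heads[b, i] = index of token i's syntactic head
        h, _ = self.rnn(emb)                          # sequential information
        head_states = torch.gather(h, 1, heads.unsqueeze(-1).expand_as(h))
        return h + torch.relu(self.edge_proj(head_states))  # + syntax edges

enc = RecurrentGraphSyntaxEncoder(16)
emb = torch.randn(2, 5, 16)
heads = torch.randint(0, 5, (2, 5))   # in practice, from a dependency parser
out = enc(emb, heads)                 # (2, 5, 16)
```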

Pyramid Mask Text Detector [article]

Jingchao Liu, Xuebo Liu, Jie Sheng, Ding Liang, Xin Li, Qingjie Liu
2019 arXiv   pre-print
Scene text detection, an essential step of a scene text recognition system, is to locate text instances in natural scene images automatically. Some recent attempts benefiting from Mask R-CNN formulate the scene text detection task as an instance segmentation problem and achieve remarkable performance. In this paper, we present a new Mask R-CNN based framework named Pyramid Mask Text Detector (PMTD) to handle scene text detection. Instead of the binary text mask generated by existing Mask R-CNN based methods, our PMTD performs pixel-level regression under the guidance of location-aware supervision, yielding a more informative soft text mask for each text instance. As for the generation of text boxes, PMTD reinterprets the obtained 2D soft mask into 3D space and introduces a novel plane clustering algorithm to derive the optimal text box on the basis of the 3D shape. Experiments on standard datasets demonstrate that the proposed PMTD brings consistent and noticeable gains and clearly outperforms state-of-the-art methods. Specifically, it achieves an F-measure of 80.13% on the ICDAR 2017 MLT dataset.
arXiv:1903.11800v1 fatcat:zleabbqd3fcznhsok5dzx5tosi
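
The "reinterpret the 2D soft mask in 3D" step can be pictured as fitting a plane to the per-pixel soft scores; the sketch below shows only that least-squares reinterpretation under toy assumptions and omits the paper's plane clustering algorithm.

```python
# Sketch of treating each pixel's soft score as a height z and fitting a
# plane z = a*x + b*y + c by least squares. Illustrative reinterpretation
# step only; PMTD's actual plane clustering is more involved.
import numpy as np

def fit_plane(soft_mask):
    """Least-squares plane fit over pixels with positive soft score."""
    ys, xs = np.nonzero(soft_mask > 0)
    z = soft_mask[ys, xs]
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

# Toy soft mask that ramps down left to right; the fit recovers the slope.
yy, xx = np.mgrid[0:32, 0:32]
ramp = np.clip((31 - xx) / 31.0, 0.0, 1.0)
print(fit_plane(ramp))   # roughly a = -1/31, b = 0, c = 1
```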

OTOV2: Automatic, Generic, User-Friendly [article]

Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, Ilya Zharkov
2023 arXiv   pre-print
ResRep (Ding et al., 2021), Group-HS and GBN-60 (You et al., 2019) achieve over 76% accuracy but consume more FLOPs than OTOv2 and are not automated for  ... 
arXiv:2303.06862v1 fatcat:6g6mmm5yard5dlkpxr45n3b6yq

DB-LLM: Accurate Dual-Binarization for Efficient LLMs [article]

Hong Chen, Chengtao Lv, Liang Ding, Haotong Qin, Xiabin Zhou, Yifu Ding, Xuebo Liu, Min Zhang, Jinyang Guo, Xianglong Liu, Dacheng Tao
2024 arXiv   pre-print
Large language models (LLMs) have significantly advanced the field of natural language processing, while the expensive memory and computation consumption impede their practical deployment. Quantization emerges as one of the most effective methods for improving the computational efficiency of LLMs. However, existing ultra-low-bit quantization always causes severe accuracy drops. In this paper, we empirically reveal the micro and macro characteristics of ultra-low-bit quantization and present a novel Dual-Binarization method for LLMs, namely DB-LLM. At the micro level, we take both the accuracy advantage of 2-bit width and the efficiency advantage of binarization into account, introducing Flexible Dual Binarization (FDB). By splitting 2-bit quantized weights into two independent sets of binaries, FDB ensures the accuracy of representations and introduces flexibility, utilizing the efficient bitwise operations of binarization while retaining the inherent high sparsity of ultra-low-bit quantization. At the macro level, we find distortion in the predictions of the LLM after quantization, which is specified as the deviations related to the ambiguity of samples. We propose the Deviation-Aware Distillation (DAD) method, enabling the model to focus differently on various samples. Comprehensive experiments show that our DB-LLM not only significantly surpasses the current state-of-the-art (SoTA) in ultra-low-bit quantization (e.g., perplexity decreased from 9.64 to 7.23), but also achieves an additional 20% reduction in computational consumption compared to the SoTA method under the same bit-width. Our code will be released soon.
arXiv:2402.11960v1 fatcat:lur42eygmjfrdjof45vuiwejmq
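
The dual-binary decomposition W ≈ a1·B1 + a2·B2 with B ∈ {-1, +1} can be illustrated with a greedy residual binarization; this is a common baseline construction, not the paper's FDB fitting, and all names below are illustrative.

```python
# Sketch of a two-binary weight approximation: binarize once, then binarize
# the residual, so the reconstruction uses only bitwise-friendly {-1, +1}
# matrices and two scalars. Greedy baseline for illustration, not FDB.
import numpy as np

def dual_binarize(w: np.ndarray):
    b1 = np.sign(w) + (w == 0)        # {-1, +1}, mapping exact zeros to +1
    a1 = np.abs(w).mean()
    r = w - a1 * b1                   # residual after the first binary base
    b2 = np.sign(r) + (r == 0)
    a2 = np.abs(r).mean()
    return a1, b1, a2, b2

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
a1, b1, a2, b2 = dual_binarize(w)
w_hat = a1 * b1 + a2 * b2             # two binaries + two scales
print(np.mean((w - w_hat) ** 2))      # quantization error of this sketch
```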
Showing results 1 — 15 out of 32,556 results